Apache Kafka: How does it manage data for large federal systems?
March 8, 2021 7:44 am
Best listening experience is on Chrome, Firefox or Safari. Subscribe to Fed Tech Talk’s audio interviews on Apple Podcasts or PodcastOne
Federal systems have grown larger and larger, and advances in internet-of-things devices and the proliferation of edge computing have made it difficult to get accurate information in real time.
Most listeners know Apache best for its open-source HTTP server. The Apache Software Foundation also hosts projects like Hadoop, Cassandra, and Apache Kafka, each of which solves a different problem. Apache Kafka solves the problem of moving and accessing large volumes of data efficiently.
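At its core, Kafka models data as an append-only, replayable log that many consumers can read at their own pace. A rough Python sketch of that idea (the class and method names below are illustrative, not Kafka's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class TopicLog:
    """Toy model of a Kafka topic partition: an append-only log of records."""
    records: list = field(default_factory=list)

    def produce(self, record):
        # Append a record and return its offset, as a broker would.
        self.records.append(record)
        return len(self.records) - 1

    def consume(self, offset, max_records=10):
        # Read records starting at `offset`. The log itself is never
        # mutated, so many consumers can read independently.
        return self.records[offset:offset + max_records]

log = TopicLog()
log.produce({"sensor": "edge-01", "reading": 42})
log.produce({"sensor": "edge-02", "reading": 17})

print(log.consume(0))  # a new consumer replays everything
print(log.consume(1))  # another consumer resumes mid-stream
```

Because consumers track their own offsets rather than deleting what they read, the same stream can feed many downstream systems, which is what makes the model attractive for large, distributed data pipelines.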
Apache Kafka began as a project at LinkedIn and was released as open source in 2011. One of its developers, a fan of Franz Kafka's writing, named the project after the influential Bohemian writer and later went on to co-found the company Confluent.
Will LaForest is the public sector chief technology officer at Confluent. He joined host John Gilroy on this week's Federal Tech Talk to discuss federal use cases for Apache Kafka and how it fits into the Federal Data Strategy. One example he gives is how Apache Kafka is used in the software "factories" that the U.S. Air Force has created.