Apache Kafka: How does it manage data for large federal systems?

This week on Federal Tech Talk, Will LaForest, public sector CTO at Confluent, joins host John Gilroy to discuss federal use cases that apply the event streaming platform Apache Kafka.

Not only have federal systems grown larger and larger, but advances in internet-of-things devices and the proliferation of edge computing have made it difficult to get accurate information in real time.

Will LaForest, CTO, Public Sector, Confluent

Most listeners know Apache best for its flagship open-source project, the Apache HTTP Server. The Apache Software Foundation also hosts many other projects, including Hadoop, Cassandra, and Kafka, and each one solves a different problem. Apache Kafka tackles the problem of moving and accessing large streams of data efficiently.
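For readers unfamiliar with the event streaming model LaForest describes, here is a minimal sketch using the confluent-kafka Python client: one process appends an event to a topic's log and another reads it back in order. The broker address, topic name, consumer group, and sensor payload are hypothetical placeholders, not details from the episode.

```python
import json

from confluent_kafka import Consumer, Producer

BROKER = "localhost:9092"   # assumed local broker, for illustration only
TOPIC = "sensor-readings"   # hypothetical topic name

# Producer side: an edge device appends an event to the topic's log.
producer = Producer({"bootstrap.servers": BROKER})
event = {"device_id": "edge-042", "temperature_c": 21.7}  # made-up payload
producer.produce(TOPIC, key=event["device_id"], value=json.dumps(event))
producer.flush()  # block until the broker acknowledges the event

# Consumer side: a downstream service reads events in order, at its own pace.
consumer = Consumer({
    "bootstrap.servers": BROKER,
    "group.id": "analytics-service",  # consumers in a group share the load
    "auto.offset.reset": "earliest",  # start from the beginning if no offset
})
consumer.subscribe([TOPIC])
msg = consumer.poll(timeout=10.0)
if msg is not None and msg.error() is None:
    print(json.loads(msg.value()))
consumer.close()
```

Because the broker retains the log independently of any consumer, many downstream systems can read the same stream without slowing down the producers, which is what makes the model attractive for large, distributed data flows.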

Apache Kafka began as a project at LinkedIn and was released as open source in 2011. Its lead developer, Jay Kreps, enjoyed the work of Franz Kafka and named the project after the influential Bohemian writer. He subsequently went on to co-found the company Confluent.

Will LaForest is the public sector chief technology officer at Confluent. He joined host John Gilroy on this week’s Federal Tech Talk to discuss federal use cases that apply Apache Kafka and how it fits into the Federal Data Strategy. One example he gives is how Apache Kafka is being used in the software “factories” that the U.S. Air Force has created.
