Apache Kafka: How does it manage data for large federal systems?


Federal systems have not only grown larger and larger, but advances in internet-of-things devices and the proliferation of edge computing have made it difficult to get accurate information in real time.

Will LaForest, CTO, Public Sector, Confluent

Most listeners know the Apache Software Foundation best for its Apache HTTP Server, but the foundation also stewards many other open-source projects, including Hadoop, Cassandra, and Apache Kafka. Each project solves a different problem. Apache Kafka's is moving and processing large volumes of streaming data efficiently, in real time.
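At its core, Kafka models data as append-only logs called topics: producers append records, and each consumer reads from the log at its own offset, so many independent systems can process the same stream at their own pace. The sketch below is a minimal in-memory illustration of that idea in Python; it is not Kafka's actual API, and the topic name and records are hypothetical.

```python
# Illustrative sketch of Kafka's core abstraction: an append-only log
# per topic, with each consumer tracking its own read offset.
# This is NOT the real Kafka API -- just the underlying idea.

class Topic:
    def __init__(self, name):
        self.name = name
        self.log = []  # append-only list of records

    def produce(self, record):
        """Append a record and return its offset in the log."""
        self.log.append(record)
        return len(self.log) - 1

    def consume(self, offset):
        """Return every record at or after the given offset."""
        return self.log[offset:]


# Hypothetical topic fed by edge devices; two records arrive.
sensors = Topic("sensor-readings")
sensors.produce({"device": "edge-01", "temp": 71})
sensors.produce({"device": "edge-02", "temp": 68})

# A consumer starts at offset 0, reads both records, and advances.
offset = 0
batch1 = sensors.consume(offset)
offset += len(batch1)

# A new record arrives; the consumer picks up only what it hasn't seen.
sensors.produce({"device": "edge-01", "temp": 72})
batch2 = sensors.consume(offset)
```

Because offsets belong to consumers rather than the log, a second consumer could replay the same topic from offset 0 without disturbing the first, which is what lets Kafka feed many downstream systems from one stream of record.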

Apache Kafka began as a project at LinkedIn and was released as open source in 2011. Co-creator Jay Kreps, who enjoyed the work of the influential Bohemian writer Franz Kafka, named the project after him. Kreps subsequently went on to co-found the company Confluent.

Will LaForest is the public sector chief technology officer at Confluent. He joined host John Gilroy on this week's Federal Tech Talk to discuss federal use cases for Apache Kafka and how it fits into the Federal Data Strategy. One example he gives is how Apache Kafka is being used in the software "factories" that the U.S. Air Force has created.

Federal Tech Talk

TUESDAYS at 1:00 P.M.

Host John Gilroy of The Oakmont Group speaks the language of federal CISOs, CIOs and CTOs, and gets into the specifics for government IT systems integrators. Follow John on Twitter. Subscribe on Apple Podcasts or Podcast One.