Blog

Concord is joining Akamai Technologies

Alex and I started Concord because we saw the need for a flexible and scalable stream processing solution. Over the last two years, we've dedicated ourselves to creating the technology we wish we had. And we didn't do it alone - we did it with the help of all the amazing people who shared our vision along the way.

Today I’m excited to announce that Akamai Technologies has acquired Concord Systems. The Concord team... Read more >

Concord Runway: Browse and deploy vetted connectors

In maintaining the Concord user group and talking with many other Concord users, one question keeps coming up: how do you connect Concord to an external data source or sink? Until now, there wasn't a simple, consistent way to do this. That was a problem for Concord users, and equally troubling for us - it went against our core philosophy that stream processing should be easy enough that... Read more >

Crawling, Parsing, and Indexing Healthcare data with Concord and Elasticsearch

Concord is a distributed stream processing engine written in C++ atop Apache Mesos. The Concord framework makes stream processing accessible to developers from all walks of life (Go, C++, Scala, Ruby, Python), abstracting away the details of distributed systems so that users focus on their business logic rather than cluster operations.
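To make "focus on your business logic" concrete, here is a hypothetical word-count operator in Python. The class and method names below are illustrative only, not Concord's actual API; the point is that the user writes a per-record callback while the framework handles partitioning, transport, and retries.

```python
# A hypothetical streaming operator: the framework would invoke
# process_record() for each incoming record, so the author writes
# only the business logic. (Names are illustrative, not Concord's API.)
from collections import Counter

class WordCountOperator:
    """Tallies word frequencies across records received from upstream."""

    def __init__(self):
        self.counts = Counter()

    def process_record(self, record: str) -> None:
        # Business logic only: split the line and update the tally.
        self.counts.update(record.split())

# Simulate the framework delivering a few records:
op = WordCountOperator()
for line in ["to be or not to be", "to stream or to batch"]:
    op.process_record(line)
```

The operator never touches sockets, offsets, or cluster state - that separation is what lets application programmers and data scientists write streaming computations in their language of choice.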

Our friends at Innovatively built and deployed a patent crawling application from zero to production in 4 hours using Concord's... Read more >

Distributed Runtimes: at-least-once is here

Introducing at-least-once processing with Apache Kafka!

If you are charging money for processing logs, you simply cannot afford to lose a single data item. At LinkedIn, where Apache Kafka was incubated, at-least-once guarantees underpin features like connection recommendations, job matching, and ad optimization.

Without an at-least-once system, that would mean a dropped resume, a missed job match, and ultimately... Read more >
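The core mechanic behind at-least-once delivery is simple: commit your position in the log only after a record has been processed. A minimal sketch, independent of Kafka and with entirely made-up function names, shows why this trades duplicates for zero loss:

```python
# A minimal sketch of at-least-once semantics: offsets are committed
# only AFTER a record is processed, so a crash between processing and
# commit causes a reprocessed duplicate - never a lost record.
# (All names here are illustrative, not any real consumer API.)

def run_consumer(log, committed, processed, crash_before_commit_at=None):
    """Process records starting from the last committed offset."""
    offset = committed[0]
    while offset < len(log):
        processed.append(log[offset])      # do the work first
        if offset == crash_before_commit_at:
            return                         # simulated crash: commit is lost
        committed[0] = offset + 1          # commit only after processing
        offset += 1

log = ["a", "b", "c"]
committed, processed = [0], []
run_consumer(log, committed, processed, crash_before_commit_at=1)  # crash after "b"
run_consumer(log, committed, processed)   # restart resumes from last commit
# processed is ["a", "b", "b", "c"]: "b" is duplicated, nothing is dropped.
```

Committing before processing would give the opposite (at-most-once) trade-off: a crash drops the in-flight record, which is exactly the resume-losing failure mode described above.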

Stream Windowing Performance Analysis: Concord and Spark Streaming

One question we get asked often is "How does Concord compare to [name your stream processing framework here]?" It came up recently when we were talking about real-time alerting with a Principal Architect at a major CDN and cloud services provider. That conversation prompted us to write... Read more >

Getting Concord on the DC/OS Universe

Deploying distributed systems can be a frustrating task. Each machine in your cluster needs to be provisioned for its specific role before your programs can even run. Tools such as Docker, Ansible, and Chef help, but they still operate at a lower level of abstraction than is ideal, and each application requires expert-level knowledge of its internals to configure properly. What users really want is a single-click solution,... Read more >

Concord's Function Tracing System

Debugging a native distributed application on a client's machine is harder than it sounds: a function backtrace alone is rarely enough to reach a system-wide diagnosis.

Imagine getting a TCP timeout on a function call. The timeout could mean a cut cable in the datacenter, a power outage on a rack, a misconfiguration, or that a nuclear bomb took out the receiving end. There are so many layers of abstraction that connecting a... Read more >

Introducing Concord

Concord was born out of need and experience -- the need for a stable, predictable stream processor, arising from painful experiences keeping Storm clusters reliable and efficient at scale. A few core points underlie Concord's philosophy:

  • Stream processing shouldn’t be restricted to distributed systems experts -- application programmers and data scientists should be able to write streaming computations in the languages they’re comfortable with

  • Cluster management should be accessible... Read more >