Concord Runway: Browse and deploy vetted connectors

While maintaining the Concord user group and talking to other Concord users, one question comes up again and again: how do you connect Concord to an external data source or sink? Until now, there wasn't a simple, consistent way to do this. That was a problem for some Concord users, and it was equally troubling for us - it went against our core philosophy that stream processing should be easy enough for the average programmer to use. After thinking hard about this problem and talking to a number of Concord users, we've taken a step toward making it easier to connect data pipelines to Concord - Concord Runway.

Concord Runway is the simplest way to deploy Concord connectors for common distributed systems such as Kafka, Cassandra, and Amazon Kinesis. It is a zero-code solution that lets you assemble data pipelines from within your terminal. Composed of an open source repository and a corresponding CLI, Concord Runway pairs a single collection of vetted connectors with a platform for distributing them.

*Concord Runway in action*

We put a lot of effort into making the Runway CLI user friendly. Lists of choices presented in ASCII tables are easy on the eyes, and connectors can be quickly configured for fast deployments. Configuration can be done via an operator manifest file, similar to the deploy command, or via onscreen prompts that walk you through the setup. On top of this, the repository from which the CLI fetches metadata is swappable. This can be handy if you want to control where your images are stored and/or distributed. Contributors may also find this feature convenient when developing new connectors for Runway, as they can simply point the CLI to their fork of Runway.
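To give a feel for the manifest-based configuration path, here is a sketch of what an operator manifest for a connector might look like. Every field name and value below is hypothetical - the actual schema is documented in the Runway README:

```yaml
# Hypothetical operator manifest for a Kafka source connector.
# Field names are illustrative, not the real Runway schema.
computation_name: kafka-source      # name of the operator in the topology
cpus: 1                             # CPU share requested from the scheduler
mem: 512                            # memory in MB
env:                                # connector-specific settings
  KAFKA_BROKERS: "broker1:9092,broker2:9092"
  KAFKA_TOPIC: "events"
```

The same settings can instead be entered interactively through the CLI's onscreen prompts.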

*Launching a Kafka source operator with Concord Runway*

You can be confident in the operators that exist in Concord Runway. We take the time to test all connectors for resiliency, configurability, and performance. If you have any doubts, the full source and deployment configuration options are publicly available for your own vetting. With this added confidence and operational convenience, you can quickly assemble simple, high-performance data pipelines. Managing and productionizing your Concord cluster instantly becomes quicker and simpler.

Due to the modular, containerized nature of Concord, operators written in any of the five supported languages (C/C++, Java, Python, Ruby, Go) can coexist within a single Concord cluster. Because of this, developers can engineer each operator in the way that best leverages a particular language or its ecosystem. For example, connectors that put performance first can be written in C/C++; connectors that favor fast iteration can be written in Python or Ruby; and connectors that need APIs only available in Java can be written in any language that runs on the JVM.

Runway gives you the option to manage and deploy your own internal Concord operators and connectors separate from the main GitHub repository. You can keep the same documentation and tooling while leveraging the advantages of a private repo. Developers wishing to expand Runway can also take advantage of this feature, pointing the CLI (`concord runway -r`) at their own fork of Runway when testing new connectors.
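As an illustration, pointing the CLI at a fork might look like the following. The repository URL is a placeholder for your own fork; only the `-r` flag is taken from the CLI as described above:

```shell
# Fetch connector metadata from a private fork instead of the main repo.
# Replace the URL with your own fork of Runway.
concord runway -r https://github.com/your-org/runway
```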

*Private repo example*

We believe a tool like Runway further promotes the main philosophy of Concord: stream processing should be simple enough that any developer can easily create their own streaming operators and manage their own topologies. Concord Runway is another piece of our "batteries included" approach to stream processing.

To get started with Concord Runway, check out the README on our GitHub page. If you're interested in the CLI itself, you can find the Runway CLI source within the Concord cmd repository. For additional help and support, please stop by our user group and say hello!

Special thanks to Alex Gallego, Jack McCloy, Shinji Kim, Igor Vasilcovsky, An Nguyen, Fabian Gambino, and Thomas Billitteri for reading drafts of this post.