Testing RippleNet


When working on large-scale systems that have grown over time, setting up a local development environment is a common challenge. If you're lucky, it takes a few hours to set up remote AWS instances; if AWS isn't an option, it can take days to set up all the network infrastructure. At Ripple, not only is our network unbounded, but we also need to simulate networks of varying sizes, which complicates the local environment challenge further.

RippleNet is a distributed network, which means the number of possible configurations and testing scenarios can be very large. That number of permutations makes things like getting good integration test coverage and debugging customer issues very difficult. To do either, you need to set up simulations of the network that mirror the customer's perspective or the use case you're testing. Manually spinning up remote instances every time was not always the best solution to these challenges.

Spin up Networks Locally

The first problem we solved was spinning up multiple RippleNet instances locally, configuring them, and running tests against the various APIs each instance exposes. This gave us the fastest way to simulate complex network configurations, write tests to prove functionality, and reproduce customer setups. It also proved an invaluable onboarding tool for new developers, who could spin up a local RippleNet on their first day.

Each RippleNet instance is essentially a Spring application. When running locally, each server starts on a random port and is assigned a local IP based on an InetAddress exposed by java.net.NetworkInterface.getNetworkInterfaces().
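To illustrate the mechanism (not the exact RippleNet code), here is a minimal sketch of picking a local address from java.net.NetworkInterface.getNetworkInterfaces(); the LocalAddressPicker class and its fallback behavior are assumptions, and the chosen address, together with server.port=0 so Spring picks a free port, would then be handed to the server on startup:

import java.net.Inet4Address;
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.Collections;

// Hypothetical helper: pick a usable local IPv4 address from the machine's
// network interfaces, in the spirit of assigning each local node its own IP.
public final class LocalAddressPicker {

    public static InetAddress pickLocalAddress() throws SocketException {
        for (NetworkInterface nic : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            if (!nic.isUp() || nic.isLoopback()) {
                continue;
            }
            for (InetAddress address : Collections.list(nic.getInetAddresses())) {
                if (address instanceof Inet4Address && address.isSiteLocalAddress()) {
                    return address;
                }
            }
        }
        // Fall back to loopback if no suitable interface is found.
        return InetAddress.getLoopbackAddress();
    }
}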

Spinning up instances like this provided several advantages:

  1. We did not need internet access to be able to run tests on networked RippleNet nodes.
  2. We could start the instances directly from the IDE using existing debugging tools.
  3. All Spring servers ran inside a single JVM, which enabled developers to gain access to a server’s application context and introspect on beans as part of the test (see the sketch after this list). This also helped in adding complex test cases without having to unnecessarily expose APIs.
  4. The tests were environment independent and automatically added to existing CI pipelines.
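To make item 3 concrete, here is a minimal, hypothetical sketch of what such a single-JVM test could look like. RippleNetServer is the server class used in the example later in this post; SingleJvmNetworkTest, PaymentLedger, and pendingPayments() are made-up names standing in for whatever bean you want to inspect, and JUnit 5 plus AssertJ are assumed:

import org.junit.jupiter.api.Test;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.context.ConfigurableApplicationContext;

import static org.assertj.core.api.Assertions.assertThat;

// Hypothetical single-JVM test: the node runs inside the test's JVM, so the test
// can reach into its Spring application context and inspect beans directly.
class SingleJvmNetworkTest {

    @Test
    void nodeStartsWithEmptyLedger() {
        try (ConfigurableApplicationContext nodeA =
                 new SpringApplicationBuilder(RippleNetServer.class)
                     .properties("server.port=0") // let Spring pick a free port
                     .run()) {

            // Introspect a bean without exposing a test-only API.
            PaymentLedger ledger = nodeA.getBean(PaymentLedger.class); // made-up bean
            assertThat(ledger.pendingPayments()).isEmpty();
        }
    }
}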

Spin up Networks Remotely

Although spinning up instances locally proved useful to test end-to-end functionality, the method had a few constraints:

  1. Since the tests were run in a single JVM and relied on the version of code you currently had running in your IDE, all nodes on the simulated network ran the same version of the server. This is far from real-world cases, where customers upgrade with different timelines and the network is expected to run with nodes on different supported versions.
  2. Additionally, to make it easier for developers, the local tests were backed by an H2 database. Again, though, this did not simulate the real world, where our customers use a variety of supported database dialects and versions.

With our growing network, testing across the above two criteria became more urgent, and we could no longer rely on our local single-JVM tests. We needed tests to run on servers backed by a variety of databases.

To solve this, we used HashiCorp's Nomad to dynamically allocate multiple RippleNet nodes and database servers. We then configured each of these nodes with the appropriate database credentials that Nomad returned before starting up the servers.
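As a rough sketch of that wiring, written against the SpringServerResource abstraction shown later in this post (my extrapolation, not the exact setup), a node could be pointed at a provisioned database like this; the spring.datasource.* names are standard Spring Boot datasource properties, PostgreSQL is just one possible dialect, and the credential values are placeholders for whatever Nomad hands back:

// Hypothetical sketch: credentials handed back by Nomad for a provisioned database
String dbUser = "<user-from-nomad>";         // placeholder
String dbPassword = "<password-from-nomad>"; // placeholder

SpringServerResource node =
    new SpringServerResource("rippleNet-node-B", new RippleNetServer());
// Point the node at a real database instead of the in-memory H2 default
node.addProperty("spring.datasource.url", "jdbc:postgresql://db-host:5432/ripplenet");
node.addProperty("spring.datasource.username", dbUser);
node.addProperty("spring.datasource.password", dbPassword);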

Once the infrastructure was running, we could configure these nodes through their exposed APIs, just as in the local tests. Now we could launch networks of any size, running any combination of server versions, backed by any combination of database dialects, and the tests and assertions could remain the same.

And, because we used Docker images to deploy the databases and servers, a developer could simulate the same setup in their local environment.

All of these techniques added a new level of complexity: we were now dealing with several kinds of deployment infrastructure, each with its own interface. Eventually, we converged on a single homegrown framework, called Topology, which standardizes the way all developers use these tools and makes it easy to integrate with the existing change request workflow. You can read more about Topology here.

As that post describes, we extend Topology to perform arbitrary tasks like:

  • Orchestrating deployment using Kubernetes
  • Orchestrating deployment on static environments
  • Integrating with AWS APIs
  • Deploying specific database schemas

As a simple example, here's how to create a TopologyFactory that defines the various tasks to start up RippleNet with a specific configuration:

// Defining RippleNet servers
// SpringServerResource is an abstraction over a Spring server that allows
// configuring a RippleNetServer with static properties
SpringServerResource node =
    new SpringServerResource("rippleNet-node-A", new RippleNetServer());
node.addProperty("property-name", "property-value");
Topology topology = new Topology();
topology.addElement(node);
// Define the auth mechanism to be used by this server
topology.addElement("rippleNet-node-A", new AuthMechanismConfigurer());
// Create payment accounts on this node
topology.addElement("rippleNet-node-A", new CreateAccount("account-name"));
// Start the defined topology with the servers that are defined
topology.start();
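Since the topology is declared in plain Java, tests written against it do not need to change when the underlying deployment target does, which is what keeps the tests and assertions the same across local and remote networks.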

As with anything, lowering the barrier to entry made this framework simple to maintain: tests stay in the same language as the implementation and remain part of the normal developer workflow. Furthermore, since the framework lives alongside the implementation code, extending the tests written against TopologyFactories is low-effort and helps us expand our integration coverage.

We're always looking for more developers to help us solve challenges like these! If you're interested in joining Ripple’s engineering team, check out our open opportunities.
