
How we built a robust demo environment for our sales team

With the help of the Service Locator pattern in our React app, we created a container specifically for our demo environment that can pre-load specific API calls, giving sales reps a deterministic, stable, and reliable demo environment.


Ganesh Datta | December 1, 2021


As a backend developer by trade, my first foray into learning React when I started building the early versions of Cortex was… an interesting experience. Learning React/Redux on the fly, we ended up building a frontend application heavily inspired by our own Spring Boot backend.

One of the most “backend”-style patterns we’ve adopted is the service locator pattern (ideally it would’ve been dependency injection, but I’ll get into that in a bit).

Early experimentation with React/Redux and Redux Thunk

When we connected Redux and set up thunk to handle async API call lifecycles, the standard practice I found in online tutorials and guides was to make the API call to your backend directly in the “action creator”. Here’s an example of creating an action with redux-toolkit (which we use to standardize best practices for Redux) that makes an API call:

export const fetchTodos = createAsyncThunk('todos/fetchTodos', async () => {
  const response = await httpClient.get('/fakeApi/todos')
  return response.todos
})


Examples like these raised a couple of questions that I wasn’t able to find answers to:

  1. How do I write an end-to-end integration test for a component that makes an API call under the hood? Do I mock the action? What happens if the action has some additional logic that I’d like to integration test? Do I need to write tests that just use a headless browser?

  2. How do I hide the implementation of the service from the rest of the code base? For example, if I want to be able to easily swap out the auth provider, how do I abstract away the auth API calls?

  3. How do I handle multiple environments that may have different behavior (demo, prod, on-prem)?

These patterns are pretty well known in the backend world, with Dependency Injection being a standard approach to solving this. Spring Boot (with Kotlin), our framework of choice, leans heavily on DI to make both testing and environment-specific behavior easy to control.

A refresher on Dependency Injection

Before we dive deeper into our eventual solution, a quick primer on dependency injection (DI).

Let’s start with an example scenario – here are the requirements:

  1. We have some API controller that accepts a JPG file and dumps it into file storage

  2. On our staging environment, we want to store this file on the local filesystem, since we care about keeping cost low and don’t care if the files are blown away now and then

  3. On production, we want to store these files in S3

One solution to this is having if/else blocks everywhere:

class MyClass(val s3: S3StorageService, val localStorage: LocalStorageService) {
    fun storeFile(file: File) {
        when (env) {
            "prod" -> s3.storeFile(file)
            else -> localStorage.storeFile(file)
        }
    }
}

This has a few challenges:

  1. It’s hard to test

  2. You’re tied to specific implementations of storage in the code

  3. MyClass has to be aware of the concrete storage types, making the code brittle

A more elegant solution would be for this class to “ask” for a “StorageService”, without caring what kind of Storage implementation it gets – all it needs to know is that it can give the StorageService a file and get an identifier in return. This gives us some quick wins:

  1. The framework can give MyClass the right StorageService implementation depending on the environment. We can tell the framework to spin up an S3StorageService on the cloud instance, and a LocalStorageService on the staging environment. (The framework does this by maintaining a “Container”, which is DI terminology for the global Interface -> Implementation mapping. This container holds the current implementations of the interfaces.)

  2. We can easily test the logic in MyClass by giving it a dummy StorageService, instead of tying MyClass to specific Storage logic

  3. One day, if we move off of S3 or add new environments, we can do so without changing the code of MyClass

With these changes, the class looks much simpler:

class MyClass(val storage: StorageService /* The framework handles the "type" */) {
  fun storeFile(file: File) {
    // my class no longer needs to care about environments, specific storage types, etc.
    // I can focus on the business logic!
    storage.persistFile(file)
  }
}

What does this have to do with React?

Let’s bring this back to the React app we were building. You’ll notice that the problems I outlined earlier and the DI example are pretty similar. So why not solve my original problem with DI?

Some quick searching at the time brought me to InversifyJS, a dependency injection container for TypeScript. At first glance it seems geared more toward backend applications, but where there’s a will, there’s a way!

My idea was the following:

  1. All API calls would be abstracted behind interfaces (AuthService, ScorecardsService, etc.) that make the actual API calls to the backend

  2. A container would hold all of the specific implementations

  3. Redux Actions would ask the container for a specific API interface and use that, instead of making API calls directly. For example: authService.login("email", "password") instead of httpClient.post("/api/v1/auth", {email: "email@cortex.io", password: "1234"}).
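To make the third step concrete, here’s a minimal sketch. The names (AuthService, makeLoginAction, locateService) are illustrative, not Cortex’s actual code:

```typescript
// Illustrative sketch: AuthService and makeLoginAction are hypothetical names.
interface AuthService {
  login(email: string, password: string): Promise<{ token: string }>;
}

// The action creator depends only on the interface. `locateService` stands in
// for whatever container lookup the app uses; a test or demo wiring can hand
// back a completely different implementation without this code changing.
function makeLoginAction(locateService: () => AuthService) {
  return async (email: string, password: string): Promise<{ token: string }> => {
    const auth = locateService();
    return auth.login(email, password);
  };
}
```

A production wiring would return an HTTP-backed implementation that POSTs to the auth endpoint; a test wiring can return a stub.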

The gotcha: Service Locator pattern

One immediate gotcha was that in true DI, every object is created at the root, so the framework can conveniently hand each one the right implementations of its interfaces. However, because React components are functions that are rendered on demand rather than objects instantiated from a root, it wasn’t clear how to implement this approach.

Instead, I ended up going with the service locator pattern. Let’s dive into a quick refresher. 

If you recall in the DI example, MyClass magically “received” the right implementation into its constructor, so by the time it was instantiated, it had everything it needed. The framework handled creating the MyClass instance from the root and wiring it up with the right “Dependencies”.

However, in this case, I don’t have a “root” that instantiates all my Redux actions, so I need my actions to be able to explicitly ask for the current implementation. Here’s how that works:

  1. A container holds the current implementations of each Interface. For example, the container in “prod” would hold the S3StorageService.

  2. Anywhere in my codebase, I can ask the container for a specific interface. “Hey container, I’d like a StorageService of some sort”. 

  3. The container gives me whatever has been wired up.
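The three steps above can be sketched as a minimal hand-rolled locator (the real app uses InversifyJS; the tokens and services here are illustrative). Since TypeScript interfaces don’t exist at runtime, bindings are keyed by string tokens:

```typescript
// A minimal hand-rolled service locator, for illustration only.
class Container {
  private bindings = new Map<string, unknown>();

  bind<T>(token: string, impl: T): void {
    this.bindings.set(token, impl);
  }

  get<T>(token: string): T {
    const impl = this.bindings.get(token);
    if (impl === undefined) {
      throw new Error(`No binding for ${token}`);
    }
    return impl as T;
  }
}

interface StorageService {
  storeFile(name: string): string;
}

// Environment-specific wiring: "prod" gets an S3-flavored stand-in,
// everything else a local one. (Both are fakes here for illustration.)
const env: string = 'prod';
const container = new Container();
container.bind<StorageService>(
  'StorageService',
  env === 'prod'
    ? { storeFile: (name) => `s3://bucket/${name}` }
    : { storeFile: (name) => `/tmp/${name}` },
);

// Anywhere in the codebase: ask the container for the interface you need.
const storage = container.get<StorageService>('StorageService');
```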

This can be considered an anti-pattern in certain cases (I won’t go into the reasons here), but given my constraints, it was a reasonable solution. Although my code is rigidly tied to the container object, I can still swap out specific implementations of services in different environments.

The impact on our ability to test

Having implemented all of our async actions against interfaces meant we could easily write end-to-end integration tests.

For our integration tests, we use react-testing-library, which is built on jsdom and actually renders our React components.

We can provide the React app a container full of stubs (we use jest-mock-extended to easily stub our interfaces), letting us mock API responses across the app and write robust integration tests. This lets us move much faster and gives us confidence in our testing suite, without having to worry about setting up something like Selenium or Cypress for now.
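The idea behind that test setup looks roughly like this, with a hand-rolled stub standing in for a jest-mock-extended mock (service and function names are hypothetical):

```typescript
interface ScorecardsService {
  listScorecards(): Promise<string[]>;
}

// A hand-rolled stub that records calls and returns canned data; in real
// tests, jest-mock-extended generates stubs like this from the interface.
class StubScorecardsService implements ScorecardsService {
  calls = 0;
  async listScorecards(): Promise<string[]> {
    this.calls += 1;
    return ['service-maturity', 'security-readiness'];
  }
}

// Code under test sees only the interface...
async function loadScorecardNames(svc: ScorecardsService): Promise<string[]> {
  const cards = await svc.listScorecards();
  return cards.map((c) => c.toUpperCase());
}

// ...so the integration test just binds the stub into the container and
// asserts on the rendered behavior, with no real network calls.
```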

The sleeper hit: our demo environment

Any sales rep knows the feeling of absolute despair when you’re half an hour into a critical customer demo and the demo environment’s backend has an outage – spinners of death, desperate page refreshes, and small talk while you hope the customer patiently waits for the site to come back up.

In a perfect world, demo environments are always up, releases seldom happen during demos, and releases that do go out have been through a rigorous testing process (one can dream). In an almost-perfect world, your demo environment has critical paths with representative data pre-loaded, so the demo doesn’t crash in the middle of an important meeting.

Immediately, this sounds like an idea that will get you killed by the engineering team for even suggesting it – can you imagine asking for if (demo) checks everywhere in your codebase? I’d rather not imagine.

Instead, imagine a world in which your demo environment uses the exact same code as your prod environment – with specific API calls modified without the rest of the app having any idea. This is exactly what our container approach unlocked!

We’re now able to wire up a container specifically for our demo environment that can pre-load specific API calls, giving sales reps a deterministic, stable, and reliable demo environment without any code changes to the rest of our production app!
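In spirit, the demo wiring looks like this (the service name and data are hypothetical, and the real container is InversifyJS rather than a bare Map):

```typescript
interface ScorecardsService {
  listScorecards(): Promise<string[]>;
}

// The demo implementation returns pre-loaded, deterministic data and never
// touches the network, so a backend outage can't take the demo down.
class DemoScorecardsService implements ScorecardsService {
  async listScorecards(): Promise<string[]> {
    return ['payments-service-maturity', 'production-readiness'];
  }
}

// The demo build binds canned services into the container; every other build
// binds the HTTP-backed implementations. The rest of the app is unchanged.
const demoContainer = new Map<string, unknown>();
demoContainer.set('ScorecardsService', new DemoScorecardsService());
```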

Wrapping it up

I want to close with a huge disclaimer – I’m not an experienced frontend developer by any stretch of the imagination. I’ve likely broken tons of best practices for React development, and my research may have left something to be desired.

Regardless, this approach has helped us deliver new features quickly, maintain high confidence through end-to-end integration tests, and provide a reliable demo environment for customers and the sales team.
If you have feedback, thoughts, or questions, please shoot me an email at ganesh@getcortexapp.com! If you’re interested in working on interesting problems like this and building a brand new category in developer tooling, apply to join our team.
