
OpenJS in Action: There’s Open Source in Your Credit Card with Neo Financial

By Blog, Case Study, OpenJS In Action

We recently met with Ian Sutherland, engineering lead and Head of Developer Experience at Canadian fintech startup Neo Financial. Ian has been with Neo Financial from the very beginning and has seen the engineering team grow from one employee to more than 150 people over the last three years. Ian is also a Collaborator on the Node.js project hosted at the OpenJS Foundation. Watch the full interview:

https://youtu.be/pCfM4_jxH0E

What is Neo Financial?

Neo is a financial technology company that is reimagining how Canadians bank. Their first product was a rewards credit card; they later introduced a top-rated high-interest savings account and recently launched Neo Invest, the first fully digital, actively managed investment experience. With Neo Invest, your portfolio is actively managed by experts and engages a greater range of asset classes and investment strategies than most competitor portfolios.

Developed with JavaScript First

The Neo Financial web banking portal provides seamless mobile-first interactions for users. The backend of the portal is built entirely using JavaScript and Node.js and powers all of the app’s microservices and transaction processing. Ian and the engineering team decided early on that they would use JavaScript for everything they possibly could in developing their product. Ian shared his opinion that “Node.js is the technology of choice for running JavaScript on the server, so from day one, a decision was made that the team would use Node.js.”

Other factors influenced their decision to work with JavaScript and related technologies like Node.js. JavaScript is currently the most widely used programming language, which they felt would make it easier to scale their team of developers quickly. The language also, put simply, works well for them. Their team finds it easy to containerize apps using Node.js. It’s also easy to build serverless functions written in JavaScript running on Node.js, with no compilation step required. It’s fair to say that Node.js provides excellent performance and scalability, keeping their team’s infrastructure costs low. 
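
As a generic illustration of that last point (not Neo's actual code), a serverless function in JavaScript on Node.js is just a plain handler that can be deployed as-is, with no build or compilation step:

// handler.js: a generic AWS Lambda-style handler, deployable as plain JavaScript
exports.handler = async (event) => {
  const body = JSON.parse(event.body || "{}");
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${body.name || "world"}` }),
  };
};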

Working with Other Open Source Technologies

Using Node.js and JavaScript for local development has worked well for the Neo Financial dev team. Ian shared that they have a swift development experience where a person can change some code and have the project reload instantly. He also cited the npm ecosystem and the “millions” of packages out there as a benefit, helping his team work very productively. They also use TypeScript and Fastify in all of their services, and webpack indirectly through other frameworks.

Open source packages and plugins help speed up their team's development work. Chances are, someone has already dealt with a similar problem. These packages make it easier to solve whatever the need is without reinventing the wheel.

A Note on Security

As a financial company, security is top of mind for Ian and his team. The Node.js project has also been focusing more on the security of Node.js itself. As a result, the devs at Neo feel very comfortable running it in production.

Contributing to Open Source

On a personal level, Ian has been involved in open source for several years. He started by making smaller contributions and later got involved in the React community. He eventually became a core maintainer of Create React App and has been working on that project for the last three or four years. Then, about four years ago, he got involved in Node.js itself, primarily as part of a working group called the Tooling Group. The focus of this group is on making Node.js the best tool it can be for building things like CLI tools, or other tools that might run in a CI or build environment, lambdas, etc.

As a team, the Neo Financial engineers try to do their part. They’ve open sourced developer tools and GitHub actions and try their best to give back wherever they possibly can. In a big thank you to the open source community, Ian said, “We have an awesome community. People are doing development on open source projects for free as volunteers when they contribute lines of code and fixes and documentation.”

We at the OpenJS Foundation feel the same way. We wouldn’t be anywhere without our contributors and our fantastic community. It was a pleasure speaking with Ian, and we’re grateful for his input as an individual and an engineering team leader. 

OpenJS In Action: How Wix Applied Multi-threading to Node.js and Cut Thousands of SSR Pods and Money

By Blog, Case Study, OpenJS In Action
Author: Guy Treger, Sr. Software Engineer, Viewer-Server team, Wix

Background:

At Wix, as part of our sites' server-side rendering architecture, we build and maintain the heavily used Server-Side-Rendering-Execution platform (aka SSRE).

SSRE is a Node.js-based, multipurpose code execution platform that is used for executing React.js code written by front-end developers all across the company. Most often, these pieces of JS code perform CPU-intensive operations that simulate activities related to site rendering in the browser.

Pain: 

SSRE has reached a total traffic of about 1 million RPM, at times requiring far more production Kubernetes pods than we considered acceptable to serve it properly.

This made us face an inherent, painful mismatch:
On one side, the nature of Node.js, an environment best suited for running I/O-bound operations on its single-threaded event loop. On the other, the extremely high traffic of CPU-oriented tasks that we had to handle as part of rendering sites on the server.

The naive solution we started with clearly proved inefficient, causing ever-growing pains in Wix's server infrastructure, such as having to manage tens of thousands of production Kubernetes pods.

Solution: 

We had to change something. The way things were, all of our heavy CPU work was done by a single thread in Node.js. The straightforward thing that comes to mind is: offload the work to other compute units (processes/threads) that can run in parallel on hardware with multiple CPU cores.

Node.js already offers multi-processing capabilities, but for our needs this was overkill. We needed a lighter solution that would introduce less overhead, both in terms of resources required and in overall maintenance and orchestration.

Node.js has only recently introduced what it calls worker threads. The feature became stable in the v14 LTS line, released in October 2020.

From the Node.js Worker-Threads documentation:

The worker_threads module enables the use of threads that execute JavaScript in parallel. To access it:

const worker = require('worker_threads');

Workers (threads) are useful for performing CPU-intensive JavaScript operations. They do not help much with I/O-intensive work. The Node.js built-in asynchronous I/O operations are more efficient than Workers can be.

Unlike child_process or cluster, worker_threads can share memory.
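
To make the shared-memory point concrete, here is a minimal sketch (not from the SSRE codebase) in which the main thread and a worker operate on the same SharedArrayBuffer instead of cloning data:

const { Worker, isMainThread, workerData } = require("worker_threads");

if (isMainThread) {
  // Both threads see the same underlying memory; nothing is copied or cloned.
  const shared = new SharedArrayBuffer(4);
  const counter = new Int32Array(shared);
  const worker = new Worker(__filename, { workerData: shared });
  worker.on("exit", () => console.log(Atomics.load(counter, 0))); // prints 1000
} else {
  const counter = new Int32Array(workerData);
  for (let i = 0; i < 1000; i++) Atomics.add(counter, 0, 1);
}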

So Node.js offers native support for threading that we could use, but since it is still fairly new, it lacks some maturity and is not entirely smooth to use in the production-grade code of a critical application.

What we were mainly missing was:

  1. Task-pool capabilities
    What does Node.js offer?
    You can manually spawn Worker threads and maintain their lifecycle yourself, e.g.:
const { Worker } = require("worker_threads");

// Create a new worker
const worker = new Worker("./worker.js", { workerData: { … } });

worker.on("exit", exitCode => {
  console.log(exitCode);
});


We were reluctant to spawn our Workers manually, make sure there are enough of them at any given time, re-create them when they die, implement various timeouts around their usage, and handle the rest of the plumbing that a multithreaded application generally needs.

  2. RPC-like inter-thread communication
    What does Node.js offer?
    Out of the box, threads can communicate with each other (e.g. the main thread and its spawned workers) using an async messaging technique:
// Send a message to the worker
aWorker.postMessage({ someData: data });

// Listen for a message from the worker
aWorker.once("message", response => {
  console.log(response);
});

Dealing with messaging can really make the code much harder to read and maintain. We were looking for something friendlier, where one thread could asynchronously “call a method” on another thread and just receive back the result.
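
To illustrate the boilerplate we wanted to avoid, a hand-rolled request/response layer over postMessage tends to look roughly like this (a sketch, not our actual code):

const { Worker } = require("worker_threads");

const worker = new Worker("./worker.js");
const pending = new Map();
let nextId = 0;

// Correlate each response with its request by id
worker.on("message", ({ id, result, error }) => {
  const { resolve, reject } = pending.get(id);
  pending.delete(id);
  error ? reject(new Error(error)) : resolve(result);
});

// Every "method call" needs an id, a pending entry and, ideally, timeout handling
function callWorker(method, params) {
  return new Promise((resolve, reject) => {
    const id = nextId++;
    pending.set(id, { resolve, reject });
    worker.postMessage({ id, method, params });
  });
}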

We went on to explore and test various open source packages around thread management and communication.


Along the way we found some packages that solve both the thread-pool problem and provide elegant RPC-like task execution on threads. A popular example was piscina. It looked all nice and dandy, but there was one showstopper.


A critical requirement for us was to have a way for our worker threads to call some APIs exposed back on the main thread. One major use-case for that was reporting business metrics and logs from code running in the worker. Due to the way these things are widely done in Wix, we couldn’t directly do them from each of the workers, and had to go through the main thread.

So we dropped these good-looking packages and looked for different approaches. We realized that we couldn't just take something off the shelf and plug it in.

Eventually, we reached a setup we were happy with.

We mixed and wired the native Workers API with two great open source packages:

  • generic-pool (npmjs) – a very solid and popular pooling API. This one helped us get our desired thread-pool feel.
  • comlink (npmjs) – a popular package mostly known for RPC-like communication in the browser (with the long-existing JS Web Workers). It recently added support for Node.js Workers. This package made inter-thread communication in our code look much more concise and elegant.

The way it looks now is along the lines of the following:

import { Worker } from 'worker_threads';
import * as genericPool from 'generic-pool';
import { Pool } from 'generic-pool';
import * as Comlink from 'comlink';
import nodeEndpoint from 'comlink/dist/umd/node-adapter';

// OurWorkerThread, ModuleExecutionWorkerApi and diagnosticApis are application-specific.
export const createThreadPool = ({
    workerPath,
    workerOptions,
    poolOptions,
}): Pool<OurWorkerThread> => {
    return genericPool.createPool(
        {
            create: () => {
                const worker = new Worker(workerPath, workerOptions);
                // Expose main-thread APIs (logging, metrics, etc.) to the worker
                Comlink.expose({
                    ...diagnosticApis,
                }, nodeEndpoint(worker));

                return {
                    worker,
                    workerApi: Comlink.wrap<ModuleExecutionWorkerApi>(nodeEndpoint(worker)),
                };
            },
            destroy: ({ worker }: OurWorkerThread) => worker.terminate(),
        },
        poolOptions
    );
};


And the usage at the web-server level:

const workerResponse = await workerPool.use(({ workerApi }: OurWorkerThread) =>
    workerApi.executeModule({
        ...executionParamsFrom(incomingRequest)
    })
);

// … Do stuff with response
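
For completeness, the worker side would look roughly like the following sketch. The executeModule name mirrors the main-thread code above, while the diagnostic API (reportMetric) is a hypothetical stand-in for the main-thread APIs mentioned earlier; this is an illustration, not Wix's actual worker code:

// worker.js (sketch)
const { parentPort } = require("worker_threads");
const Comlink = require("comlink");
const nodeEndpoint = require("comlink/dist/umd/node-adapter");

// Call back into the APIs the main thread exposed (logging, metrics, etc.)
const mainThreadApis = Comlink.wrap(nodeEndpoint(parentPort));

// Expose the worker's own API; the main thread wraps it as ModuleExecutionWorkerApi
Comlink.expose({
  async executeModule(executionParams) {
    await mainThreadApis.reportMetric("execution_started"); // hypothetical diagnostic API
    // ... run the CPU-intensive rendering work here ...
    return { ok: true };
  },
}, nodeEndpoint(parentPort));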

One major takeaway from the development journey was that retrofitting worker threads onto existing code is by no means straightforward. Logic objects (i.e. JS functions) cannot be passed back and forth between threads, so considerable refactoring is sometimes needed. Clear, concrete, pure-data-based APIs for communication between the main thread and the workers must be defined, and the code adjusted accordingly.
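
A tiny example of that constraint (illustrative only): plain data crosses the thread boundary via structured cloning, while a function throws a DataCloneError.

const { Worker, isMainThread, parentPort, workerData } = require("worker_threads");

if (isMainThread) {
  // Pure data clones fine:
  const w = new Worker(__filename, { workerData: { html: "<div/>", props: { id: 42 } } });
  w.on("message", (msg) => console.log(msg));

  // A function cannot be cloned; uncommenting this line throws a DataCloneError:
  // new Worker(__filename, { workerData: { render: () => "<div/>" } });
} else {
  parentPort.postMessage(`worker received keys: ${Object.keys(workerData).join(", ")}`);
}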

Results:

The results were amazing: we cut the number of pods by over 70% and made the entire system more stable and resilient. A direct consequence was cutting much of Wix’s infra costs accordingly.

Some numbers:

  • Our initial goal was achieved – total SSRE pod count dropped by ~70%.
    Respectively, RPM per pod improved by 153%.
  • Better SLA and a more stable application – 
    • Response time p50 : -11% 
    • Response time p95: -20%
    • Error rate decreased even further: 10x better
  • Big cut in total direct SSRE compute cost: -21%

Lessons learned and future planning:

We’ve learned that Node.js is indeed also suitable for CPU-bound, high-throughput services. We managed to reduce our infra management overhead, but this modest goal turned out to be overshadowed by much more substantial gains.

The introduction of multithreading into the SSRE platform has opened the way for follow-up research of optimizations and improvements:

  • Finding the optimal number of CPU cores per machine, possibly allowing for non-constant-size thread pools.
  • Refactoring the application to make workers do work that is as purely CPU-bound as possible.
  • Researching memory sharing between threads to avoid a lot of large-object cloning.
  • Applying this solution to other major Node.js-based applications in Wix.

Node.js Case Study: Ryder

By Blog, Case Study, member blog, Node.js

Ryder Delivers Real-Time Visibility in Less Time with Profound Logic’s Node.js solution

This case study was initially published on Profound Logic’s website. Profound Logic is a member of the OpenJS Foundation.

Ryder’s low-code, screen-scraping solution was effective for a long time, yet as their customers’ expectations evolved, they had an opportunity to upgrade.

To keep up with consumer demand, they implemented Profound Logic’s Node.js development products to create RyderView. Their new web-based solution helped transform usability for their customers and optimize internal business processes for an overall better experience.

The Challenge

Third-party freight carriers across North America rely on Ryder’s Last Mile legacy systems to successfully deliver packages. Constantly adding features to the legacy system made for a monolithic application that was no longer intuitive or scalable.

The Solution

The Ryder team, led by Barnabus Muthu, IT & Application Architect, wanted to develop an intuitive web application that provided real-time access to critical information. Muthu wanted to balance the need for new development with his legacy programs’ extensive business logic.

Profound Logic’s Node.js development solutions were a great fit and allowed Muthu to expose his IBM i databases via API to push and pull data from external cloud systems in real time. He was also able to reduce development time by using npm packages. Using Node.js, Ryder was able to build a modern, web-based application that no longer relied on green screens, while leveraging its existing RPG business logic.

The Result

This new solution was named RyderView and it transformed usability for its customers, translating to faster onboarding and reduced training costs for Ryder.

For third-party users, it led to improved productivity, as entire time-consuming processes were made obsolete. Previously, Ryder’s third-party agents used paper-based templates to capture information while in the field. Now that Ryder’s new application uses microservices to push and pull data from iDB2, end users have been upgraded to a mobile application. These advancements benefited Ryder as well, allowing them to eliminate paperwork, printing costs, and the licensing of document processing software.

Read the full case study: https://www.profoundlogic.com/case-study/ryder/

AMP Project Case Study: VOGSY

By AMP, Blog, Case Study

VOGSY Improves Services Firms’ Quote-to-Cash Speed by 80% with AMP-powered dynamic emails

The full case study was originally published on the AMP website. AMP is a hosted project at the OpenJS Foundation.

VOGSY is a professional services automation solution built on Google Workspace. By offering a single source of engagement to efficiently manage projects, resources, tasks, timesheets and billing, VOGSY streamlines services firms’ business operations from quote to cash, preventing handoff delays between sales, project delivery, and accounting teams.

Challenge

VOGSY was facing challenges due to siloed departments and disparate tools. Seeing an opportunity to never drop the quote-to-cash baton, VOGSY implemented AMP For Email to send actionable workflow emails directly to its users’ inboxes.

Results

The results of using the open source project led to huge efficiency gains for VOGSY clients including an 80 percent increase in approval speed for invoices, timesheets, quotes and expenses.  

To read more about the benefits for VOGSY read the full case study: https://amp.dev/success-stories/vogsy/

Dressed to Impress: NET-A-PORTER, Mr Porter and JavaScript Frameworks

By Blog, Case Study, Fastify, OpenJS In Action

For this OpenJS In Action, Robin Glen, Principal Developer at YNAP, joined OpenJS Foundation Executive Director Robin Ginn to discuss the use of JavaScript in building a global brand. YNAP is the parent company of luxury retailer NET-A-PORTER. Glen works within the Luxury division team at NET-A-PORTER (NAP), working on NET-A-PORTER and Mr Porter. He has been with NAP for over 10 years. In addition to his work with NAP, Glen is also a member of the Chrome Advisory board.

Glen has been leading the developer team at YNAP for almost a decade, and continues to test, iterate and implement cutting edge open source technologies. For example, he was an early adopter of the Fastify web framework for Node.js to help increase web performance, particularly with the demand spikes his company experiences during holidays and sales.

Topics ranged from ways to make the user experience feel more pleasant and secure, to issues around JavaScript bloat. Questions focused on the history of NAP, how NAP chose their current framework, and how that framework allows them to best serve customers on their e-commerce site.

The full interview is available here: OpenJS In Action: NET-A-PORTER, Mr Porter and JavaScript Frameworks 

Timestamps

0:00 Brief Introduction

2:09 Technology and NET-A-PORTER 

3:11 Defining Architectural Moment

4:50 Where YNAP is Today

6:50 Factors in Choosing Technologies? 

10:30 Fastify

14:00 YNAP and JS Foundation

15:10 Looking Forward: Engineering Roadmap  

18:58 What’s a “Good Day At Work” for you?

20:00 Wrap-Up

OpenJS In Action: ESRI powering COVID-19 response with open source

By Blog, Case Study, Dojo, ESLint, Grunt, OpenJS In Action

The OpenJS In Action series features companies that use OpenJS Foundation projects to develop efficient, effective web technologies. 

Esri, a geographic information systems company, is using predictive models and interactive maps with JavaScript technologies to help the world better understand and respond to the recent COVID-19 pandemic. Recently, they have built tools that visualize how social distancing precautions can help reduce cases and the burden on healthcare systems. They have also helped institutions like Johns Hopkins create their own informational maps by providing a template app and resources to extend functionality. 

Esri uses OpenJS Foundation projects such as Dojo Toolkit, Grunt, ESLint and Intern to increase developer productivity and deliver high-quality applications that help the world fight back against the pandemic. 

Esri’s contributions to the COVID response effort and an explanation of how they created the underlying technologies are available at this video: 

https://youtu.be/KLnht-1F3Ao

Robin Ginn, Executive Director of the OpenJS Foundation, spoke with Kristian Ekenes, Product Engineer at Esri, to highlight the work his company has been doing. Esri normally creates mapping software, databases and tools to help businesses manage spatial data. However, Ekenes started work on a tool called Capacity Analysis when the COVID-19 pandemic began to spread. 

Capacity Analysis is a configurable app that allows organizations to display and interact with results from two scenarios predicting a hospital’s ability to meet the demand of COVID-19 patients given configurable parameters, such as the percentage of people following social distancing guidelines. Health experts can create two hypothetical scenarios using one of two models: Penn Medicine’s COVID-19 Hospital Impact Model for Epidemics (CHIME) or the CDC’s COVID-19Surge model. Then they can deploy their own version of Capacity Analysis to view how demand for hospital beds, ICU beds, and ventilators varies by time and geography in each scenario. This tool is used by governments worldwide to better predict how the pandemic will challenge specific areas.

During the interview, Ekenes spoke about the challenges that come with taking on ambitious projects like Capacity Analysis. Esri has both a large developer team and a diverse ecosystem of applications. This makes it difficult to maintain consistency in the API and SDKs deployed across desktop and mobile platforms. To overcome these challenges, Esri utilizes several OpenJS Foundation projects including Dojo Toolkit, Grunt, ESLint and Intern.

Ekenes explained that Grunt and ESLint increase developer productivity by providing real-time feedback when writing code. The linter also standardizes work across developers by indicating when incorrect practices are being used. This reduces the number of pull requests between collaborators and saves time for the entire team. Intern allows developers to write testing modules and create high-quality apps by catching bugs early. In short, Esri helps ensure consistent and thoroughly tested applications by incorporating OpenJS Foundation projects into their work. 

From streaming to studio: The evolution of Node.js at Netflix

By Blog, Case Study, Node.js, Project Update

As platforms grow, so do their needs. However, the core infrastructure is often not designed to handle these new challenges as it was optimized for a relatively simple task. Netflix, a member of the OpenJS Foundation, had to overcome this challenge as it evolved from a massive web streaming service to a content production platform. Guilherme Hermeto, Senior Platform Engineer at Netflix, spearheaded efforts to restructure the Netflix Node.js infrastructure to handle new functions while preserving the stability of the application. In his talk below, he walks through his work and provides resources and tips for developers encountering similar problems.

Check out the full presentation 

Netflix initially used Node.js to enable high-volume web streaming to over 182 million subscribers. Their three goals with this early infrastructure were to provide observability (metrics), debuggability (diagnostic tools) and availability (service registration). The result was the NodeQuark infrastructure. An application gateway authenticates and routes requests to the NodeQuark service, which then communicates with APIs and formats responses that are sent back to the client. With NodeQuark, Netflix also created a managed experience — teams could create custom API experiences for specific devices. This allows the Netflix app to run seamlessly on different devices. 

Beyond streaming

However, Netflix wanted to move beyond web streaming and into content production. This posed several challenges to the NodeQuark infrastructure and the development team. Web streaming requires relatively few applications, but serves a huge user base. On the other hand, a content production platform houses a large number of applications that serve a limited userbase. Furthermore, a content production app must have multiple levels of security for employees, partners and users. An additional issue is that development for content production is ideally fast paced while platform releases are slow, iterative processes intended to ensure application stability. Grouping these two processes together seems difficult, but the alternative is to spend unnecessary time and effort building a completely separate infrastructure. 

Hermeto decided that in order to solve Netflix’s problems, he would need to use self-contained modules. In other words, plugins! By transitioning to plugins, the Netflix team was able to separate the infrastructure’s functions while still retaining the ability to reuse code shared between web streaming and content production. Hermeto then took plugin architecture to the next step by creating application profiles. The application profile is simply a list of plugins required by an application. The profile reads in these specific plugins and then exports a loaded array. Therefore, the risk of a plugin built for content production breaking the streaming application was reduced. Additionally, by sectioning code out into smaller pieces, the Netflix team was able to remove moving parts from the core system, improving stability. 
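
As a rough, hypothetical sketch of the idea (plugin names and the load() shape are assumptions, not Netflix's actual NodeQuark code), an application profile can be little more than a declared list of plugins that gets loaded and exported as an array:

// application-profile.js (hypothetical sketch)
const metricsPlugin = { name: "metrics", load: () => ({ name: "metrics" /* ... */ }) };
const authPlugin = { name: "auth", load: () => ({ name: "auth" /* ... */ }) };
const studioWorkflowPlugin = { name: "studio-workflow", load: () => ({ name: "studio-workflow" /* ... */ }) };

// The streaming app declares only the plugins it needs, so a plugin built for
// content production cannot break an application whose profile excludes it.
const streamingProfile = [metricsPlugin, authPlugin];

module.exports = streamingProfile.map((plugin) => plugin.load());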

Looking ahead

In the future, Hermeto wants to allow teams to create specific application profiles that they can give to customers. Additionally, Netflix may be able to switch from application versions to application profiles as the code breaks into smaller and smaller pieces. 

To finish his talk, Hermeto gave his personal recommendations for open source projects that are useful for observability and debuggability. Essentially, a starting point for building out your own production-level application!  

Personal recommendations for open source projects

  • Metrics and alerts
  • Centralized logging
  • Distributed tracing
  • Diagnostics
  • Exception management

Expedia Group: Building better testing pipelines with open source

By Blog, Case Study, ESLint, OpenJS In Action

The OpenJS In Action series features companies that use OpenJS Foundation projects to help develop efficient, effective web technologies. 

Software developers at global travel company Expedia Group are using JavaScript, ESLint and robust testing pipelines to reduce inconsistency and duplication in their code. Switching from Java and JSP to Node.js has streamlined development and design systems. Beyond that, Expedia developers are looking into creating a library of reusable design and data components for use across their many brands and pages. 

Expedia is an example of how adoption of new technologies and techniques can improve customer and developer experiences. 

A video featuring Expedia is available here: https://youtu.be/FDF6SgtEvYY

Robin Ginn, executive director of the OpenJS Foundation, interviewed Tiffany Le-Nguyen, Software Development Engineer at Expedia Group. Le-Nguyen explained how accessibility and performance concerns led developers to modernize Expedia’s infrastructure. One of the choices they made was to integrate ESLint into their testing pipeline to catch bugs and format input before content was pushed live. ESLint also proved to be a huge time-saver — it enforced development standards and warned developers when incorrect practices were being used. 
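
As a generic example of what that can look like (not Expedia's actual configuration), a shared ESLint config can be checked into the repo and run as part of the test step so the build fails before problematic code ships:

// .eslintrc.js (generic example configuration)
module.exports = {
  root: true,
  env: { node: true, es2021: true },
  parserOptions: { ecmaVersion: 2021, sourceType: "module" },
  extends: ["eslint:recommended"],
  rules: {
    "no-unused-vars": "error", // flag dead code before it ships
    eqeqeq: "error",           // require strict equality checks
  },
};

A CI job that runs "eslint ." alongside the test suite can then block merges on lint errors as well as test failures.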

ESLint was especially useful for guiding new developers through JavaScript, Node.js and TypeScript. Expedia made the bold move to switch most of their applications from Java and JSP to Node.js and TypeScript. Le-Nguyen is now able to catch most errors and quickly push out new features by combining Node.js with Express and a robust testing pipeline. 

However, Expedia is used globally to book properties and dates for trips. Users reserve properties with different currencies across different time zones. This makes it difficult to track when a property was reserved and whether the correct amount was paid. Luckily, Expedia was able to utilize Globalize, an OpenJS project that provides number formatting and parsing, date and time formatting and currency formatting for languages across the world. Le-Nguyen was able to simplify currency tracking across continents by integrating Globalize into the project. 
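
A minimal Globalize sketch (illustrative only, not Expedia's code) shows the idea: load the CLDR data once, then format the same amount per locale and currency:

const Globalize = require("globalize");
const cldrData = require("cldr-data"); // assumes the cldr-data package is installed

Globalize.load(cldrData.entireSupplemental());
Globalize.load(cldrData.entireMainFor("en", "de"));

console.log(Globalize("en").formatCurrency(1234.5, "USD")); // $1,234.50
console.log(Globalize("de").formatCurrency(1234.5, "EUR")); // 1.234,50 €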

To end the talk, Le-Nguyen suggested that web developers should take another look into UI testing. Modern testing tools have simplified the previously clunky process. Proper implementation of a good testing pipeline improves the developer experience and leads to a better end product for the user. 

How Node.js saved the U.S. Government $100K

By Blog, Case Study, Node.js, OpenJS World

The following blog is based on a talk given at the OpenJS Foundation’s annual OpenJS World event and covers solutions created with Node.js.

When someone proposes a complicated, expensive solution, ask yourself: can it be done cheaper, better and/or faster? Last year, an external vendor wanted to charge $103,000 to create an interactive form and store the responses. Ryan Hillard, Systems Developer at the U.S. Small Business Administration, was brought in to create a less expensive, low-maintenance alternative to the vendor’s proposal. Hillard was able to create a solution using ~320 lines of code and $3,000. In the talk below, Hillard describes what the difficulties were and how his Node.js solution fixed the problem.

Last year, Hillard started work on a government case management system that received and processed feedback from external and internal users. Unfortunately, a recent upgrade and rigorous security measures prevented external users from leaving feedback. Hillard needed to create a secure interactive form and then store the data. However, the solution also needed to be cheap, easy to maintain and stable. 

Hillard decided to use three common services: Amazon Simple Storage Service (S3), Amazon Web Services (AWS) Lambda and Node.js. Together, these pieces provided a simple and versatile way to capture and then store response data. Maintenance is low because the servers are maintained by Amazon. Additionally, future developers can easily alter and improve the process as all three services/languages are commonly used. 
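
The talk does not include the code itself, but a minimal sketch of the pattern (bucket and field names here are hypothetical) looks something like this:

// Sketch of an AWS Lambda handler (Node.js) that accepts a form submission and stores it in S3
const AWS = require("aws-sdk");
const s3 = new AWS.S3();

exports.handler = async (event) => {
  const submission = JSON.parse(event.body || "{}");

  await s3.putObject({
    Bucket: "feedback-submissions",       // hypothetical bucket name
    Key: `responses/${Date.now()}.json`,  // one object per submission
    Body: JSON.stringify(submission),
    ContentType: "application/json",
  }).promise();

  return { statusCode: 200, body: JSON.stringify({ ok: true }) };
};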

To end his talk, Hillard discussed the design and workflow processes that led him to his solution. He compares JavaScript to a giant toolkit with hundreds of libraries and dependencies — a tool for every purpose. However, this variety can be counterproductive as the complexity – and thus the management time – increases.

Developers should ask themselves how they can solve their problems without introducing anything new. In other words, size does matter — the smallest, simplest toolkit is the best!

Linux Foundation interview with NASA Astronaut Christina Koch

By Announcement, Blog, Case Study, Event, OpenJS World

Jason Perlow, Editorial Director at the Linux Foundation, had a chance to speak with NASA astronaut Christina Koch. This year, she completed a record-breaking 328 days at the International Space Station for the longest single spaceflight by a woman and participated in the first all-female spacewalk with fellow NASA astronaut Jessica Meir. Christina gave a keynote at the OpenJS Foundation’s flagship event, OpenJS World, on June 24, 2020, where she shared more on how open source JavaScript and web technologies are being used in space. This post can also be found on the Linux Foundation blog. 

JP: You spent nearly a year in space on the ISS, and you dealt with isolation from your friends and family, having spent time only with your crewmates. It’s been about three months for most of us isolating at home because of the COVID-19 pandemic. We haven’t been trained to deal with these types of things — regular folks like us don’t usually live in arctic habitats or space stations. What is your advice for people dealing with these quarantine-type situations for such long periods? 

CK: Well, I can sympathize, and it can be a difficult challenge even for astronauts, and it can be hard to work through and come up with strategies for. For me, the #1 thing was making sure I was in charge of the framework I used to view the situation. I flipped it around: instead of thinking about all the things I was missing out on and the things that I didn’t have available to me, I tried to focus on the unique things that I did have, that I would never have again, that I would miss one day. 

So every time I heard that thought in my head, that “I just wish I could…” whatever, I would immediately replace it with “this one thing I am experiencing I will never have again, and it is unique”. 

So the advice I have offered since the very beginning of the stay at home situation has been finding that thing about our current situation that you truly love that you’ll know you will miss. Recognize what you know is unique about this era, whether it is big, or small — whether it is philosophical or just a little part of your day — and just continually focus on that. The biggest challenge is we don’t know when this is going to be over, so we can quickly get into a mindset where we are continually replaying into our heads “when is this going to be over? I just want to <blank>” and we can get ourselves into a hole. If you are in charge of the narrative, and then flip it, that can really help.

I have to say that we are all experiencing quarantine fatigue. Even when it may have been fun and unique in the beginning — obviously, nobody wanted to be here, and nobody hopes we are in this situation going forward, but there are ways we can deal with it and find the silver lining. Right now, the challenge is staying vigilant, some of us have discovered those strategies that work, but some of us are just tired of working at them, continually having to be our best selves and bringing it every day. 

So you need to recommit to those strategies, but sometimes you need to switch it up — halfway through my mission, I changed every bit of external media that was available to me. We have folks that will uplink our favorite TV shows, podcasts, books and magazines, and other entertainment sources. I got rid of everything I had been watching and listening to and started fresh with a new palette. It kind of rejuvenated me and reminded me that there were new things I could feast my mind on and unique sensory experiences I could have. Maybe that is something you can do to keep it fresh and recommit to those strategies. 

JP: I am stuck at home here, in Florida, with my wife. When you were up in the ISS, you were alone, with just a couple of your crewmates. Were you always professional and never fought with each other, or did you occasionally have spats about little things?

CK: Oh my goodness, there were always little spats that could affect our productivity if we allowed it. I can relate on so many levels. Being on the ISS for eleven months, with a lot of the same people in a row, not only working side-by-side but also socializing on the weekends, and during meals at the end of the day. I can relate because my husband and I were apart for almost two years if you take into account my training in Russia, and then my flight. Of course, now, we are together 24 hours a day, and we are both fortunate enough that we can work from home. 

It is a tough situation, but at NASA, we all draw from a skill set called Expeditionary Behavior. It’s a fancy phrase to help us identify and avoid conflict situations and get out of those situations if we find ourselves in them. Those are things like communication — which I know we should all be doing our best at, as well as group living. But other things NASA brought up in our training are self-care, team care, leadership and, particularly, followership. Often, we talk about leadership as an essential quality, but we forget that followership and supporting a leader are also very important. That is important in any relationship, whether it is a family, a marriage, helping the other people on your team, even if it is an idea that they are carrying through that is for the betterment of the whole community or something like that. The self-care and team care are really about recognizing when people on your team or in your household may need support, knowing when you need that support, and being OK with asking for it and being clear about what needs you may have.

A common thread among all those lines is supporting each other. One way, in my opinion, the easiest way to get yourself out of feeling sorry for whatever situation you might be in is to think about the situation everyone else is in and what they might need. Asking someone else, “Hey, how are you doing today, what can I do for you?” is another way to switch that focus. It helped me on my mission, and it is helping me at home in quarantine and recognizing that it is not always easy. If you are finding that you have to try hard and dig deep to use some of these strategies, you are not alone — that is what takes right now. But you can do it, and you can get through it.

JP: I have heard that being in the Arctic is not unlike being on another planet. How did that experience help you prepare for being in space, and potentially places such as the Moon or even Mars?

CK: I do think it is similar in a lot of ways. One, because of the landscape. It’s completely barren, very stark, and it is inhospitable. It gives us this environment to live where we have to remember that we are vulnerable, and we have to find ways to remain productive and not be preoccupied with that notion when doing our work. Just like on the space station, you can feel quite at home, wearing your polo shirt and Velcro pants, going about your day, and not recognizing that right outside that shell that you are in is the vacuum of space, and at any second, things could take a turn for the worse. 

In Antarctica and some of the Arctic areas that were very isolated, should you have a medical emergency, it can often be harder to evacuate or work on a person in those situations than even working on the ISS. At the ISS, you can undock and get back to earth in a matter of hours. At the south pole, weather conditions could prevent you from getting a medevac for weeks. In both situations, you have to develop strategies not to be preoccupied with the environmental concerns but still be vigilant to respond to them should something happen. That was something I took away from that experience — ways to not think about that too much, and to rely on your training should those situations arise. And then, of course, all the other things that living in isolation gives us.

The one thing that I found in that realm is something called sensory underload. And this is what your mind goes through when you see all the same people and faces, you keep staring at the same walls, you’ve tasted all the same food, and you’ve smelled all the same smells for so long. Your brain hasn’t been able to process something new for so long that it affects how we think and how we go about the world. In these situations, we might have to foster new sensory inputs and new situations and new things to process. NASA is looking into a lot of those things like reality augmentation for long-duration spaceflight, but in situations like the Arctic and Antarctic, even bringing in a care package, just to have new things in your environment can be so important when you are experiencing sensory underload. 

JP: The younger people reading this interview might be interested in becoming an astronaut someday. What should the current, or next generation — the Gen Y’s, the Gen Z’s — be thinking about doing today — to pursue a career as an astronaut? 

CK: I cannot wait to see what that generation does. Already they have been so impressive and so creative. The advice I have is to follow your passions. But in particular, what that means is to take that path that allows you to be your best self and contribute in the maximum possible way. The story I like to tell is that when I was in high school, I was a true space geek, and I went to space camp, and there we learned all the things you need to do to become an astronaut. 

There was a class on it, and they had a whiteboard with a checklist of what you should do — so everyone around me who wanted to be an astronaut was just scribbling this stuff down. And at that moment, I realized if I were ever to become an astronaut, I would want it to be because I pursued the things that I was naturally drawn to and passionate about, and hopefully, naturally good at. If one day that shaped me into someone who could contribute as an astronaut, only then would I become truly worthy of becoming one. So I waited until I felt I could make that case to apply to become an astronaut, and it led me to this role of focusing on the idea of contributing. 

The good news about following a path like that is that even if you don’t end up achieving the exact dream you may have, whether that’s becoming an astronaut or something else that may be very difficult to achieve, you’ve done what you’ve loved along the way, which guarantees that you will be successful and fulfilled. And that is the goal. Eyes on the prize, but make sure you are following the path that is right for you.

JP: Some feel that human-crewed spaceflight is an expensive endeavor when we have extremely pressing issues to deal with on Earth — climate change, the population explosion, feeding the planet, and recent problems such as the Coronavirus. What can we learn from space exploration that could potentially solve these issues at home on terra firma?

CK: It is a huge concern, in terms of resource allocation, so many things that are important that warrant our attention. And I think that your question, what can we learn from space exploration, is so important and there are countless examples — the Coronavirus, to start. NASA is studying how the immune system functions at a fundamental level for humans by the changes that occur in a microgravity environment. We’re studying climate change — numerous explorations, on the space station and other areas of NASA. Exploration is enabled by discovery and by technological advances. Where those take us, we can’t even determine. The camera in your smartphone or in your tablet was enabled by NASA technology. 

There are countless practical examples, but to me, the real answer is bigger than all of that — and what it can show us is what can be accomplished when we work together on a common goal and a shared purpose. There are examples of us overcoming things on a global scale in the past that seemed insurmountable at the beginning, such as battling the hole in the ozone layer. When that first came out, we had to study it, we had to come up with mitigation strategies, and they had to be adopted by the world, even when people were pointing out the potential economic drawbacks to dealing with it. 

But the problem was more significant than that, and we all got together, and we solved it. So looking towards what we can do when we work together with a unified purpose is really what NASA does for us on an even bigger scale. We talk about how exploration and looking into space is uplifting — I consider it to be uplifting for all across the spectrum. There are so many ways we can uplift people from all backgrounds. We can provide them with the tools to have what they need to reach their full potential, but then what? What is across that goal line? It is bigger things that inspire them to be their best, and that is how NASA can be uplifting for everyone, in achieving the big goals.

JP: So recently, NASA resumed human-crewed spaceflight using a commercial launch vehicle, the SpaceX Crew Dragon capsule. Do you feel that the commercialization of space is inevitable? Is the heavy lifting of the future going to come from commercial platforms such as SpaceX, Boeing, et cetera for the foreseeable future? And is the astronaut program always going to be a government-sponsored entity, or will we see private astronauts? And what other opportunities do you see in the private sector for startups to partner with NASA?

CK: For sure. I think that we are already seeing that the commercial aspect is playing out now, and it’s entirely a positive thing for me. You asked about private astronauts — there are already private astronauts training with a company, doing it at NASA through a partnership, and having a contract to fly on a SpaceX vehicle to the ISS through some new ways we are commercializing Low Earth Orbit. That’s already happening, and everyone I know is excited about it. I think anyone with curiosity, anyone who can carry dreams and hopes into space, and bring something back to Earth is welcome in the program.

I think that the model that NASA has been using for the last ten years to bring in commercial entities is ideal. We are looking to the next deeper set, going back to the moon, and then applying those technologies to go on to Mars. At the same time, we sort of foster and turn over the things we’ve already explored, such as Low Earth Orbit and bringing astronauts to and from the space station to foster a commercial space industry. To me, that strategy is perfect; a government organization can conduct that work that may not have that private motivation or the commercial incentives. Once it is incubated, then it is passed on, and that is when you see the commercial startups coming. 

The future is bright for commercialization in space, and I think that bringing in innovation that can happen when you pass off something to an entirely new set of designers is one of the most exciting aspects of this. One of the neat examples of that is SpaceX and their spacesuits — I heard that they did not consult with who we at NASA use as our spacesuit experts that have worked with us in the past. I think that is probably because they did not want to be biased by legacy hardware and legacy ways of doing things. They wanted to re-invent it from the start, to ensure that every aspect was re-thought and reengineered and done in a potentially new way. When you’ve been owners of that legacy hardware that’s difficult to do — especially in such a risky field and in a place where something tried and true has such a great magnetic draw. So, to break through the innovation barrier, bringing commercial partners onboard is so exciting and important.

JP: Let’s get to the Linux Foundation’s core audience here, developers. You were an engineer, and you used to program. What do you think the role of developers is in space exploration?

CK: Well, it cannot be understated. When I was in the space industry before becoming an astronaut, I was a developer of instrumentation for space probes. I built the little science gadgets and was typically involved in the sensor front-end, the intersection of the detectors’ physics and the electronics of the readouts. But that necessitated a lot of the testing, and it was fundamentals testing. Most of the programming I did was building up the GUIs for all the tests that we needed to run, and the I/O to talk to the instruments, to learn what it was telling us, to make sure it could function in a wide variety of environmental states and different inputs that it was expected to see, once it eventually got into space. 

That was just my aspect — and then there is all the processing of the data. If you think about astronomy, there is so much we know about the universe through different telescopes, space-based and ground-based, and one of the things we do is anticoincidence detection. We had to come up with algorithms that detect only the kind of particles or on wavelengths that we want to identify, and not the ones that deposit energy in different ways that we are trying to study. Even just the algorithms to suss out that tiny aspect of what those kinds of X-Ray detectors on those telescopes do, is entirely software-intensive. Some of it is actual firmware because it has to happen so quickly, in billionths of a second, but basically, the software enables the entire industry, whether it is the adaptive optics that allow us to see clearly, or the post-processing, or even just the algorithms we use to refine and do the R&D, it’s everywhere, and it is ubiquitous. The first GUIs I ever wrote were on a Linux system using basic open source stuff to talk to our instruments. As far as I know, there is no single person who can walk into any job at NASA and have no programming experience. It’s everywhere.

JP: Speaking of programming and debugging, I saw a video of you floating around in the server room on the ISS, which to me looked like a bunch of ThinkPad laptops taped to a bulkhead and sort of jury-rigged networked there. What’s it like to debug technical problems in space with computer systems and dealing with various technical challenges? It’s not like you can call Geek Squad, and they are going to show up in a van and fix your server if something breaks. What do you do up there?

CK: That is exactly right, although there is only one thing that is inaccurate about that statement — those Lenovos are Velcroed to the wall, not taped (laugh). We rely on the experts on the ground as astronauts. Interestingly, for the most part, just like an IT department, just like at any enterprise, the experts, for the most part, can remotely login to our computers, even though they are in space. That still happens. But if one of the servers is completely dead, they call on us to intercede, we’ve had to re-image drives, and do hardware swaps.

JP: OK, a serious question, a religious matter. Are you a Mac or a PC user, an iOS or an Android user, and are you a cat or a dog person? These are crucial questions; you could lose your whole audience if you answer this the wrong way, so be careful.

CK: I am terrified right now. So the first one I get to sidestep because I have both a Mac and a PC. I am fluent in both. The second — Android all the way. And as the third, I thought I was a cat person, but since I got my dog Sadie, I am a dog person. We don’t know what breed she is since she is from the Humane Society and is a rescue, so we call her an LBD — a Little Brown Dog. She is a little sweetheart, and I missed her quite a bit on my mission.

JP: Outside of being an astronaut, I have heard you have already started to poke around GitHub, for your nieces and nephews. Are there any particular projects you are interested in? Any programming languages or tools you might want to learn or explore?

CK: Definitely. Well, I want to learn Python because it is really popular, and it would help out with my Raspberry Pi projects. The app I am writing right now in Android Studio is one I consulted on with my 4-year-old niece, who wanted a journal app. I’m not telling anyone my username on GitHub because I am too embarrassed about what a terrible coder I am. I wouldn’t want anyone to see it, but it will be uploaded there. Her brother wants the app too, so that necessitated the version control. It’s just for fun, for now, having missed that technical aspect from my last job. I do have some development boards, and I do have various home projects and stuff like that.

JP: In your keynote, you mentioned that the crew’s favorite activity in space is pizza night. What is your favorite food or cuisine, and is there anything that you wished you could eat in space that you can’t?

CK: My favorite food or cuisine on Earth is something you can’t have in space: sushi, or poke, all the fresh seafood things that I got introduced to from living in American Samoa and visiting Hawaii and places like that; I missed those. All the food we have in space is rehydrated or from MREs, so it doesn’t have a lot of texture; it has to have the consistency of mac and cheese or something like that. So what I really missed was chips, especially chips and salsa. Anything crunchy is going to crumble up and go everywhere, so we don’t have anything crunchy. Unfortunately, I have eaten enough since I got back to make up for all that time without chips and salsa. 

JP: Thank you very much, Christina, for your time and insights! Great interview.

Watch Christina’s full keynote here: