OpenJS In Action: ESRI powering COVID-19 response with open source

Categories: Blog, Case Study, Dojo, ESLint, Grunt, OpenJS In Action

The OpenJS In Action series features companies that use OpenJS Foundation projects to develop efficient, effective web technologies. 

Esri, a geographic information systems company, is using predictive models and interactive maps with JavaScript technologies to help the world better understand and respond to the recent COVID-19 pandemic. Recently, they have built tools that visualize how social distancing precautions can help reduce cases and the burden on healthcare systems. They have also helped institutions like Johns Hopkins create their own informational maps by providing a template app and resources to extend functionality. 

Esri uses OpenJS Foundation projects such as Dojo Toolkit, Grunt, ESLint and Intern to increase developer productivity and deliver high-quality applications that help the world fight back against the pandemic. 

Esri’s contributions to the COVID-19 response effort, and an explanation of how they created the underlying technologies, are available in this video: 

https://youtu.be/KLnht-1F3Ao

Robin Ginn, Executive Director of the OpenJS Foundation, spoke with Kristian Ekenes, Product Engineer at Esri, to highlight the work his company has been doing. Esri normally creates mapping software, databases and tools to help businesses manage spatial data. However, Ekenes started work on a tool called Capacity Analysis when the COVID-19 pandemic began to spread. 

Capacity Analysis is a configurable app that allows organizations to display and interact with results from two scenarios predicting a hospital’s ability to meet the demand of COVID-19 patients given configurable parameters, such as the percentage of people following social distancing guidelines. Health experts can create two hypothetical scenarios using one of two models: Penn Medicine’s COVID-19 Hospital Impact Model for Epidemics (CHIME) or the CDC’s COVID-19Surge model. Then they can deploy their own version of Capacity Analysis to view how demand for hospital beds, ICU beds, and ventilators varies by time and geography in each scenario. This tool is used by governments worldwide to better predict how the pandemic will challenge specific areas.

During the interview, Ekenes spoke about the challenges that come with taking on ambitious projects like Capacity Analysis. Esri has both a large developer team and a diverse ecosystem of applications. This makes it difficult to maintain consistency in the API and SDKs deployed across desktop and mobile platforms. To overcome these challenges, Esri utilizes several OpenJS Foundation projects, including Dojo Toolkit, Grunt, ESLint and Intern.

Ekenes explained that Grunt and ESLint increase developer productivity by providing real-time feedback when writing code. The linter also standardizes work across developers by indicating when incorrect practices are being used. This reduces the number of pull requests between collaborators and saves time for the entire team. Intern allows developers to write testing modules and create high-quality apps by catching bugs early. In short, Esri helps ensure consistent and thoroughly tested applications by incorporating OpenJS Foundation projects into their work. 
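
To illustrate the kind of setup Ekenes describes, here is a minimal Gruntfile sketch that re-lints source files on every save. It assumes the grunt-eslint and grunt-contrib-watch plugins and is not Esri’s actual configuration:

```js
// Gruntfile.js: a minimal sketch assuming the grunt-eslint and
// grunt-contrib-watch plugins; not Esri's actual configuration.
module.exports = function (grunt) {
  grunt.initConfig({
    eslint: {
      // Lint all JavaScript sources except dependencies.
      target: ["src/**/*.js", "!node_modules/**"]
    },
    watch: {
      scripts: {
        // Re-run the linter whenever a source file changes,
        // giving developers the real-time feedback described above.
        files: ["src/**/*.js"],
        tasks: ["eslint"]
      }
    }
  });

  grunt.loadNpmTasks("grunt-eslint");
  grunt.loadNpmTasks("grunt-contrib-watch");

  grunt.registerTask("default", ["eslint", "watch"]);
};
```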

From streaming to studio: The evolution of Node.js at Netflix

Categories: Blog, Case Study, Node.js, Project Update

As platforms grow, so do their needs. However, the core infrastructure is often not designed to handle these new challenges as it was optimized for a relatively simple task. Netflix, a member of the OpenJS Foundation, had to overcome this challenge as it evolved from a massive web streaming service to a content production platform. Guilherme Hermeto, Senior Platform Engineer at Netflix, spearheaded efforts to restructure the Netflix Node.js infrastructure to handle new functions while preserving the stability of the application. In his talk below, he walks through his work and provides resources and tips for developers encountering similar problems.

Check out the full presentation 

Netflix initially used Node.js to enable high volume web streaming to over 182 million subscribers. Their three goals with this early infrastructure were to provide observability (metrics), debuggability (diagnostic tools) and availability (service registration). The result was the NodeQuark infrastructure. An application gateway authenticates and routes requests to the NodeQuark service, which then communicates with APIs and formats responses that are sent back to the client. With NodeQuark, Netflix also created a managed experience — teams could create custom API experiences for specific devices. This allows the Netflix app to run seamlessly on different devices. 

Beyond streaming

However, Netflix wanted to move beyond web streaming and into content production. This posed several challenges to the NodeQuark infrastructure and the development team. Web streaming requires relatively few applications, but serves a huge user base. On the other hand, a content production platform houses a large number of applications that serve a limited user base. Furthermore, a content production app must have multiple levels of security for employees, partners and users. An additional issue is that development for content production is ideally fast-paced, while platform releases are slow, iterative processes intended to ensure application stability. Grouping these two processes together seems difficult, but the alternative is to spend unnecessary time and effort building a completely separate infrastructure. 

Hermeto decided that in order to solve Netflix’s problems, he would need to use self-contained modules. In other words, plugins! By transitioning to plugins, the Netflix team was able to separate the infrastructure’s functions while still retaining the ability to reuse code shared between web streaming and content production. Hermeto then took plugin architecture to the next step by creating application profiles. The application profile is simply a list of plugins required by an application. The profile reads in these specific plugins and then exports a loaded array. Therefore, the risk of a plugin built for content production breaking the streaming application was reduced. Additionally, by sectioning code out into smaller pieces, the Netflix team was able to remove moving parts from the core system, improving stability. 
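
A minimal sketch of this plugin-and-profile idea, with hypothetical plugin names (the talk does not publish Netflix’s actual NodeQuark code), might look like this:

```js
// A minimal sketch of plugin-based application profiles.
// Plugin names and shapes here are hypothetical, not Netflix's code.

// Each plugin is a self-contained module with a common interface.
const metricsPlugin = {
  name: "metrics",
  init(app) {
    app.capabilities.push("metrics");
  }
};

const serviceRegistrationPlugin = {
  name: "service-registration",
  init(app) {
    app.capabilities.push("service-registration");
  }
};

// An application profile is simply the list of plugins an
// application requires; loading it returns the loaded array.
function loadProfile(plugins, app) {
  return plugins.map((plugin) => {
    plugin.init(app);
    return plugin.name;
  });
}

// The streaming app loads only its own profile, so a plugin built
// for content production cannot break the streaming application.
const streamingApp = { capabilities: [] };
const loaded = loadProfile([metricsPlugin, serviceRegistrationPlugin], streamingApp);
console.log(loaded); // ["metrics", "service-registration"]
```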

Looking ahead

In the future, Hermeto wants to allow teams to create specific application profiles that they can give to customers. Additionally, Netflix may be able to switch from application versions to application profiles as the code breaks into smaller and smaller pieces. 

To finish his talk, Hermeto gave his personal recommendations for open source projects that are useful for observability and debuggability. Essentially, a starting point for building out your own production-level application!  

Personal recommendations for open source projects 

  • Metrics and alerts
  • Centralized logging
  • Distributed tracing
  • Diagnostics
  • Exception management

Expedia Group: Building better testing pipelines with open source

Categories: Blog, Case Study, ESLint, OpenJS In Action

The OpenJS In Action series features companies that use OpenJS Foundation projects to help develop efficient, effective web technologies. 

Software developers at global travel company Expedia Group are using JavaScript, ESLint and robust testing pipelines to reduce inconsistency and duplication in their code. Switching from Java and JSP to Node.js has streamlined development and design systems. Beyond that, Expedia developers are looking into creating a library of reusable design and data components for use across their many brands and pages. 

Expedia is an example of how adoption of new technologies and techniques can improve customer and developer experiences. 

A video featuring Expedia is available here: https://youtu.be/FDF6SgtEvYY

Robin Ginn, executive director of the OpenJS Foundation, interviewed Tiffany Le-Nguyen, Software Development Engineer at Expedia Group. Le-Nguyen explained how accessibility and performance concerns led developers to modernize Expedia’s infrastructure. One of the choices they made was to integrate ESLint into their testing pipeline to catch bugs and format input before content was pushed live. ESLint also proved to be a huge time-saver — it enforced development standards and warned developers when incorrect practices were being used. 
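
For illustration, a generic sketch of an ESLint configuration wired into a pipeline might look like the following. The rules here are examples, not Expedia’s actual standards:

```js
// .eslintrc.js: a generic sketch, not Expedia's actual configuration.
module.exports = {
  root: true,
  env: { node: true, es2021: true },
  parserOptions: { ecmaVersion: 2021, sourceType: "module" },
  extends: ["eslint:recommended"],
  rules: {
    // Fail the pipeline on likely bugs...
    "no-unused-vars": "error",
    eqeqeq: "error",
    // ...but only warn on stylistic issues.
    "no-console": "warn"
  }
};
```

Running npx eslint . as a pipeline step then flags incorrect practices before content is pushed live.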

ESLint was especially useful for guiding new developers through JavaScript, Node.js and TypeScript. Expedia made the bold move to switch most of their applications from Java and JSP to Node.js and TypeScript. Le-Nguyen is now able to catch most errors and quickly push out new features by combining Node.js with Express and a robust testing pipeline. 

However, Expedia is used globally to book properties and dates for trips. Users reserve properties with different currencies across different time zones. This makes it difficult to track when a property was reserved and whether the correct amount was paid. Luckily, Expedia was able to utilize Globalize, an OpenJS project that provides number formatting and parsing, date and time formatting and currency formatting for languages across the world. Le-Nguyen was able to simplify currency tracking across continents by integrating Globalize into the project. 
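
As a small illustration of that integration, here is a minimal Globalize sketch following the library’s documented CLDR setup; it is not Expedia’s production code:

```js
// A minimal Globalize sketch, assuming the globalize and cldr-data
// packages; not Expedia's production code.
const Globalize = require("globalize");
const cldrData = require("cldr-data");

// Globalize needs CLDR locale data loaded before formatting.
Globalize.load(cldrData.entireSupplemental());
Globalize.load(cldrData.entireMainFor("en", "de"));

// The same amount, formatted for different markets.
console.log(Globalize("en").formatCurrency(1149.99, "USD")); // "$1,149.99"
console.log(Globalize("de").formatCurrency(1149.99, "EUR")); // "1.149,99 €"
```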

To end the talk, Le-Nguyen suggested that web developers should take another look into UI testing. Modern testing tools have simplified the previously clunky process. Proper implementation of a good testing pipeline improves the developer experience and leads to a better end product for the user. 

How Node.js saved the U.S. Government $100K

Categories: Blog, Case Study, Node.js, OpenJS World

The following blog is based on a talk given at the OpenJS Foundation’s annual OpenJS World event and covers solutions created with Node.js.

When someone proposes a complicated, expensive solution, ask yourself: can it be done cheaper, better and/or faster? Last year, an external vendor wanted to charge $103,000 to create an interactive form and store the responses. Ryan Hillard, Systems Developer at the U.S. Small Business Administration, was brought in to create a less expensive, low-maintenance alternative to the vendor’s proposal. Hillard was able to create a solution using ~320 lines of code and $3,000. In the talk below, Hillard describes the difficulties involved and how his Node.js solution fixed the problem.

Last year, Hillard started work on a government case management system that received and processed feedback from external and internal users. Unfortunately, a recent upgrade and rigorous security measures prevented external users from leaving feedback. Hillard needed to create a secure interactive form and then store the data. However, the solution also needed to be cheap, easy to maintain and stable. 

Hillard decided to use three common services: Amazon Simple Storage Service (S3), Amazon Web Services (AWS) Lambda and Node.js. Together, these pieces provided a simple and versatile way to capture and then store response data. Maintenance is low because the servers are maintained by Amazon. Additionally, future developers can easily alter and improve the process as all three services/languages are commonly used. 
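
A hedged sketch of that shape is a Lambda handler that writes each form submission to S3. The bucket name and event handling here are assumptions for illustration, not Hillard’s actual code:

```js
// A sketch of the Lambda-plus-S3 pattern; not Hillard's actual code.
// Assumes the Lambda runtime's bundled AWS SDK and a hypothetical bucket.
const AWS = require("aws-sdk");
const s3 = new AWS.S3();

exports.handler = async (event) => {
  // API Gateway delivers the interactive form's submission as a JSON body.
  const submission = JSON.parse(event.body);

  // Store each response as its own object, keyed by timestamp.
  await s3
    .putObject({
      Bucket: "feedback-form-responses", // hypothetical bucket name
      Key: `submissions/${Date.now()}.json`,
      Body: JSON.stringify(submission),
      ContentType: "application/json"
    })
    .promise();

  return { statusCode: 200, body: JSON.stringify({ ok: true }) };
};
```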

To end his talk, Hillard discussed the design and workflow processes that led him to his solution. He compares JavaScript to a giant toolkit with hundreds of libraries and dependencies — a tool for every purpose. However, this variety can be counterproductive as the complexity – and thus the management time – increases.

Developers should ask themselves how they can solve their problems without introducing anything new. In other words, size does matter — the smallest, simplest toolkit is the best!

Linux Foundation interview with NASA Astronaut Christina Koch

Categories: Announcement, Blog, Case Study, Event, OpenJS World

Jason Perlow, Editorial Director at the Linux Foundation, had a chance to speak with NASA astronaut Christina Koch. This year, she completed a record-breaking 328 days at the International Space Station for the longest single spaceflight by a woman and participated in the first all-female spacewalk with fellow NASA astronaut Jessica Meir. Christina gave a keynote at the OpenJS Foundation’s flagship event, OpenJS World on June 24, 2020, where she shared more on how open source JavaScript and web technologies are being used in space. This post can also be found on the Linux Foundation blog. 

JP: You spent nearly a year in space on the ISS, and you dealt with isolation from your friends and family, having spent time only with your crewmates. It’s been about three months for most of us isolating at home because of the COVID-19 pandemic. We haven’t been trained to deal with these types of things — regular folks like us don’t usually live in arctic habitats or space stations. What is your advice for people dealing with these quarantine-type situations for such long periods? 

CK: Well, I can sympathize, and it can be a difficult challenge even for astronauts, and it can be hard to work through and come up with strategies for. For me, the #1 thing was making sure I was in charge of the framework I used to view the situation. I flipped it around: instead of thinking about all the things I was missing out on and the things that I didn’t have available to me, I tried to focus on the unique things that I did have, that I would never have again, that I would miss one day. 

So every time I heard that thought in my head, that “I just wish I could…” whatever, I would immediately replace it with “this one thing I am experiencing I will never have again, and it is unique”. 

So the advice I have offered since the very beginning of the stay-at-home situation has been to find that thing about our current situation that you truly love and know you will miss one day. Recognize what you know is unique about this era, whether it is big or small — whether it is philosophical or just a little part of your day — and just continually focus on that. The biggest challenge is that we don’t know when this is going to be over, so we can quickly get into a mindset where we are continually replaying in our heads “when is this going to be over? I just want to <blank>” and we can get ourselves into a hole. If you are in charge of the narrative and can flip it, that can really help.

I have to say that we are all experiencing quarantine fatigue. Even when it may have been fun and unique in the beginning — obviously, nobody wanted to be here, and nobody hopes we are in this situation going forward, but there are ways we can deal with it and find the silver lining. Right now, the challenge is staying vigilant, some of us have discovered those strategies that work, but some of us are just tired of working at them, continually having to be our best selves and bringing it every day. 

So you need to recommit to those strategies, but sometimes you need to switch it up — halfway through my mission, I changed every bit of external media that was available to me. We have folks that will uplink our favorite TV shows, podcasts, books and magazines, and other entertainment sources. I got rid of everything I had been watching and listening to and started fresh with a new palette. It kind of rejuvenated me and reminded me that there were new things I could feast my mind on and unique sensory experiences I could have. Maybe that is something you can do to keep it fresh and recommit to those strategies. 

JP: I am stuck at home here, in Florida, with my wife. When you were up in the ISS, you were alone, with just a couple of your crewmates. Were you always professional and never fought with each other, or did you occasionally have spats about little things?

CK: Oh my goodness, there were always little spats that could affect our productivity if we allowed it. I can relate on so many levels. I was on the ISS for eleven months with a lot of the same people in a row, not only working side by side but also socializing on the weekends and during meals at the end of the day. I can relate because my husband and I were apart for almost two years if you take into account my training in Russia, and then my flight. Of course, now we are together 24 hours a day, and we are both fortunate enough that we can work from home. 

It is a tough situation, but at NASA, we all draw from a skill set called Expeditionary Behavior. It’s a fancy phrase to help us identify and avoid conflict situations and get out of those situations if we find ourselves in them. Those are things like communication — which I know we should all be doing our best at — as well as group living. Other things NASA brought up in our training are self-care, team care, leadership, and particularly, followership. Often, we talk about leadership as an essential quality, but we forget that followership and supporting a leader are also very important. That is important in any relationship, whether it is a family, a marriage, or helping the other people on your team, even if it is an idea that they are carrying through for the betterment of the whole community or something like that. Self-care and team care are really about recognizing when people on your team or in your household may need support, knowing when you need that support, and being OK with asking for it and being clear about what needs you may have.

A common thread among all those lines is supporting each other. In my opinion, the easiest way to get yourself out of feeling sorry for whatever situation you might be in is to think about the situation everyone else is in and what they might need. Asking someone else, “Hey, how are you doing today, what can I do for you?” is another way to switch that focus. It helped me on my mission, and it is helping me at home in quarantine, while recognizing that it is not always easy. If you are finding that you have to try hard and dig deep to use some of these strategies, you are not alone — that is what it takes right now. But you can do it, and you can get through it.

JP: I have heard that being in the Arctic is not unlike being on another planet. How did that experience help you prepare for being in space, and potentially places such as the moon or even Mars?

CK: I do think it is similar in a lot of ways. One, because of the landscape. It’s completely barren, very stark, and it is inhospitable. It gives us this environment to live where we have to remember that we are vulnerable, and we have to find ways to remain productive and not be preoccupied with that notion when doing our work. Just like on the space station, you can feel quite at home, wearing your polo shirt and Velcro pants, going about your day, and not recognizing that right outside that shell that you are in is the vacuum of space, and at any second, things could take a turn for the worse. 

In Antarctica and some of the Arctic areas that were very isolated, should you have a medical emergency, it can often be harder to evacuate or treat a person than it would be on the ISS. From the ISS, you can undock and get back to Earth in a matter of hours. At the South Pole, weather conditions could prevent you from getting a medevac for weeks. In both situations, you have to develop strategies not to be preoccupied with the environmental concerns but still be vigilant to respond to them should something happen. That was something I took away from that experience — ways to not think about that too much, and to rely on your training should those situations arise. And then, of course, all the other things that living in isolation gives us.

The one thing that I found in that realm is something called sensory underload. And this is what your mind goes through when you see all the same people and faces, you keep staring at the same walls, you’ve tasted all the same food, and you’ve smelled all the same smells for so long. Your brain hasn’t been able to process something new for so long that it affects how we think and how we go about the world. In these situations, we might have to foster new sensory inputs and new situations and new things to process. NASA is looking into a lot of those things like reality augmentation for long-duration spaceflight, but in situations like the Arctic and Antarctic, even bringing in a care package, just to have new things in your environment can be so important when you are experiencing sensory underload. 

JP: The younger people reading this interview might be interested in becoming an astronaut someday. What should the current, or next generation — the Gen Y’s, the Gen Z’s — be thinking about doing today — to pursue a career as an astronaut? 

CK: I cannot wait to see what that generation does. Already they have been so impressive and so creative. The advice I have is to follow your passions. But in particular, what that means is to take that path that allows you to be your best self and contribute in the maximum possible way. The story I like to tell is that when I was in high school, I was a true space geek, and I went to space camp, and there we learned all the things you need to do to become an astronaut. 

There was a class on it, and they had a whiteboard with a checklist of what you should do — so everyone around me who wanted to be an astronaut was just scribbling this stuff down. And at that moment, I realized if I were ever to become an astronaut, I would want it to be because I pursued the things that I was naturally drawn to and passionate about, and hopefully, naturally good at. If one day that shaped me into someone who could contribute as an astronaut, only then would I become truly worthy of becoming one. So I waited until I felt I could make that case to apply to become an astronaut, and it led me to this role of focusing on the idea of contributing. 

The good news about following a path like that is that even if you don’t end up achieving the exact dream you may have, whether that’s to become an astronaut or something else that may be very difficult to achieve, you’ve done what you’ve loved along the way, which guarantees that you will be successful and fulfilled. And that is the goal. Eyes on the prize, but make sure you are following the path that is right for you.

JP: Some feel that human-crewed spaceflight is an expensive endeavor when we have extremely pressing issues to deal with on Earth — climate change, the population explosion, feeding the planet, and recent problems such as the Coronavirus. What can we learn from space exploration that could potentially solve these issues at home on terra firma?

CK: It is a huge concern, in terms of resource allocation; there are so many things that are important and warrant our attention. And I think that your question, what can we learn from space exploration, is so important, and there are countless examples — the Coronavirus, to start. NASA is studying how the human immune system functions at a fundamental level through the changes that occur in a microgravity environment. We’re studying climate change through numerous investigations, on the space station and in other areas of NASA. Exploration is enabled by discovery and by technological advances. Where those take us, we can’t even determine. The camera in your smartphone or in your tablet was enabled by NASA technology. 

There are countless practical examples, but to me, the real answer is bigger than all of that — and what it can show us is what can be accomplished when we work together on a common goal and a shared purpose. There are examples of us overcoming things on a global scale in the past that seemed insurmountable at the beginning, such as battling the hole in the ozone layer. When that first came out, we had to study it, we had to come up with mitigation strategies, and they had to be adopted by the world, even when people were pointing out the potential economic drawbacks to dealing with it. 

But the problem was more significant than that, and we all got together, and we solved it. So looking towards what we can do when we work together with a unified purpose is really what NASA does for us on an even bigger scale. We talk about how exploration and looking into space is uplifting — I consider it to be uplifting for all across the spectrum. There are so many ways we can uplift people from all backgrounds. We can provide them with the tools to have what they need to reach their full potential, but then what? What is across that goal line? It is bigger things that inspire them to be their best, and that is how NASA can be uplifting for everyone, in achieving the big goals.

JP: So recently, NASA resumed human-crewed spaceflight using a commercial launch vehicle, the SpaceX Crew Dragon capsule. Do you feel that the commercialization of space is inevitable? Is the heavy lifting of the future going to come from commercial platforms such as SpaceX, Boeing, et cetera for the foreseeable future? And is the astronaut program always going to be a government-sponsored entity, or will we see private astronauts? And what other opportunities do you see in the private sector for startups to partner with NASA?

CK: For sure. I think that we are already seeing that the commercial aspect is playing out now, and it’s entirely a positive thing for me. You asked about private astronauts — there are already private astronauts training with a company, doing it at NASA through a partnership, and having a contract to fly on a SpaceX vehicle to the ISS through some new ways we are commercializing Low Earth Orbit. That’s already happening, and everyone I know is excited about it. I think anyone with curiosity, anyone who can carry dreams and hopes into space, and bring something back to Earth is welcome in the program.

I think that the model that NASA has been using for the last ten years to bring in commercial entities is ideal. We are looking to the next deeper step, going back to the moon, and then applying those technologies to go on to Mars. At the same time, we foster and turn over the things we’ve already explored, such as Low Earth Orbit and bringing astronauts to and from the space station, to grow a commercial space industry. To me, that strategy is perfect; a government organization can conduct work that may not yet have a private motivation or commercial incentives. Once it is incubated, it is passed on, and that is when you see the commercial startups coming. 

The future is bright for commercialization in space, and the innovation that can happen when you pass something off to an entirely new set of designers is one of the most exciting aspects of this. One neat example is SpaceX and their spacesuits — I heard that they did not consult the spacesuit experts who have worked with NASA in the past. I think that is probably because they did not want to be biased by legacy hardware and legacy ways of doing things. They wanted to re-invent it from the start, to ensure that every aspect was re-thought, re-engineered, and done in a potentially new way. When you are the owner of that legacy hardware, that is difficult to do — especially in such a risky field, where something tried and true has such a great magnetic draw. So, to break through the innovation barrier, bringing commercial partners onboard is so exciting and important.

JP: Let’s get to the Linux Foundation’s core audience here, developers. You were an engineer, and you used to program. What do you think the role of developers is in space exploration?

CK: Well, it cannot be overstated. When I was in the space industry before becoming an astronaut, I was a developer of instrumentation for space probes. I built the little science gadgets and was typically involved in the sensor front-end, the intersection of the detectors’ physics and the electronics of the readouts. But that necessitated a lot of testing, and it was fundamentals testing. Most of the programming I did was building up the GUIs for all the tests that we needed to run, and the I/O to talk to the instruments, to learn what it was telling us, to make sure it could function in a wide variety of environmental states and different inputs that it was expected to see once it eventually got into space. 

That was just my aspect — and then there is all the processing of the data. If you think about astronomy, there is so much we know about the universe through different telescopes, space-based and ground-based, and one of the things we do is anticoincidence detection. We had to come up with algorithms that detect only the kinds of particles or wavelengths that we want to identify, and not the ones that deposit energy in different ways. Even the algorithms to suss out that tiny aspect of what those kinds of X-Ray detectors on those telescopes do are entirely software-intensive. Some of it is actual firmware because it has to happen so quickly, in billionths of a second, but basically, the software enables the entire industry, whether it is the adaptive optics that allow us to see clearly, or the post-processing, or even just the algorithms we use to refine and do the R&D. It’s everywhere, and it is ubiquitous. The first GUIs I ever wrote were on a Linux system using basic open source tools to talk to our instruments. As far as I know, there is no single person who can walk into any job at NASA with no programming experience. It’s everywhere.

JP: Speaking of programming and debugging, I saw a video of you floating around in the server room on the ISS, which to me looked like a bunch of ThinkPad laptops taped to a bulkhead and sort of jury-rigged networked there. What’s it like to debug technical problems in space with computer systems and dealing with various technical challenges? It’s not like you can call Geek Squad, and they are going to show up in a van and fix your server if something breaks. What do you do up there?

CK: That is exactly right, although there is one thing that is inaccurate about that statement — those Lenovos are Velcroed to the wall, not taped (laugh). As astronauts, we rely on the experts on the ground. Interestingly, just like an IT department at any enterprise, the experts can, for the most part, remotely log in to our computers, even though they are in space. That still happens. But if one of the servers is completely dead, they call on us to intercede; we’ve had to re-image drives and do hardware swaps.

JP: OK, a serious question, a religious matter. Are you a Mac or a PC user, an iOS or an Android user, and are you a cat or a dog person? These are crucial questions; you could lose your whole audience if you answer this the wrong way, so be careful.

CK: I am terrified right now. So the first one I get to sidestep because I have both a Mac and a PC; I am fluent in both. The second — Android all the way. And as for the third, I thought I was a cat person, but since I got my dog Sadie, I am a dog person. We don’t know what breed she is since she is from the Humane Society and is a rescue, so we call her an LBD — a Little Brown Dog. She is a little sweetheart, and I missed her quite a bit on my mission.

JP: Outside of being an astronaut, I have heard you have already started to poke around GitHub, for your nieces and nephews. Are there any particular projects you are interested in? Any programming languages or tools you might want to learn or explore?

CK: Definitely. Well, I want to learn Python because it is really popular, and it would help out with my Raspberry Pi projects. Right now I am writing an app in Android Studio, which I consulted on with my 4-year-old niece, who wanted a journal app. I’m not telling anyone my username on GitHub because I am too embarrassed about what a terrible coder I am. I wouldn’t want anyone to see it, but the app will be uploaded there. Her brother wants the app too, so that necessitated the version control. It’s just for fun for now, having missed that technical aspect of my last job. I do have some development boards, and I do have various home projects and stuff like that.

JP: In your keynote, you mentioned that the crew’s favorite activity in space is pizza night. What is your favorite food or cuisine, and is there anything that you wished you could eat in space that you can’t?

CK: My favorite food or cuisine on Earth is something you can’t have in space: sushi, or poke, all the fresh seafood that I got introduced to from living in American Samoa and visiting Hawaii and places like that. I missed those. All the food we have in space is rehydrated or from MREs, so it doesn’t have a lot of texture; it has to have the consistency of mac and cheese or something like that. So what I really missed is chips — especially chips and salsa. Anything crunchy is going to crumble up and go everywhere, so we don’t have anything crunchy. I have eaten more than enough chips and salsa to make up for it since I’ve been back. 

JP: Thank you very much, Christina, for your time and insights! Great interview.

Watch Christina’s full keynote here:

How The Weather Company uses Node.js in production

Categories: Announcement, Blog, Case Study, ESLint, member blog, Node.js

Using Node.js improved site speed, performance, and scalability

This piece was written by Noel Madali and originally appeared on the IBM Developer Blog. IBM is a member of the OpenJS Foundation.

The Weather Company uses Node.js to power their weather.com website, a multinational weather information and news website available in 230+ locales and localized in about 60 languages. As an industry leader in audience reach and accuracy, weather.com delivers weather data, forecasts, observations, historical data, news articles, and video.

Because weather.com offers a location-based service that is used throughout the world, its infrastructure must support consistent uptime, speed, and precise data delivery. Scaling the solution to billions of unique locations has created multiple technical challenges and opportunities for the technical team. In this blog post, we cover some of the unique challenges we had to overcome when building weather.com and discuss how we ended up using Node.js to power our internationalized weather application.

Drupal ‘n Angular (DNA): The early days

In 2015, we were a Drupal ‘n Angular (DNA) shop. We unofficially pioneered the industry by marrying Drupal and Angular together to build a modular, content-based website. We used Drupal as our CMS to control content and page configuration, and we used Angular to code front-end modules.

Front-end modules were small blocks of user interface that had data and some interactive elements. Content editors would move the modules around to visually create a page, and they used Drupal to create articles about weather and publish them on the website.

DNA was successful in rapidly expanding the website’s content and giving editors the flexibility to create page content on the fly.

As our usage of DNA grew, we faced many technical issues which ultimately boiled down to three main themes:

  • Poor performance
  • Instability
  • Slower time for developers to fix, enhance, and deploy code (also known as velocity)

Poor performance

Our site suffered from poor performance, with sluggish load times and unreliable availability. This, in turn, directly impacted our ad revenue since a faster page translated into faster ad viewability and more revenue generation.

To address some of our performance concerns, we conducted different front-end experiments.

  • We analyzed and evaluated modules to determine what we could change. For example, we evaluated getting rid of some modules that were not used all the time or we rewrote modules so they wouldn’t use giant JS libraries.
  • We evaluated our usage of a tag manager in reference to ad serving performance.
  • We lazy-loaded modules so they were not included on first load, reducing the amount of JavaScript served to the client (see the sketch after this list).
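
A modern equivalent of that lazy-loading experiment, sketched with dynamic import() rather than the original Angular-era code, defers a module until the user actually needs it:

```js
// A generic lazy-loading sketch using dynamic import();
// module path and names are hypothetical.
async function showRadarModule(container) {
  // The module's JavaScript is fetched only at this point,
  // keeping the first page load small.
  const { renderRadar } = await import("./modules/radar.js");
  renderRadar(container);
}

// Load the module only when it scrolls into view.
const observer = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      showRadarModule(entry.target);
      observer.unobserve(entry.target);
    }
  }
});
observer.observe(document.querySelector("#radar"));
```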

Instability

Because of the fragile deployment process of using Drupal with Angular, our site suffered from too much downtime. The deployment process was a matter of taking the name of a git branch and entering it into a UI to get released into different environments. There was no real build process, but only version control.

Ultimately, this led to many bad practices that impacted developers, including a lack of version control methodology, non-reproducible builds, and the like.

Slower developer velocity

The majority of our developers had front-end experience, but very few of them were knowledgeable about the inner workings of Drupal and PHP. As such, features and bug fixes related to PHP were not addressed as quickly due to knowledge gaps.

Large deployments contributed to slower velocity as well as stability issues, where small changes could break the entire site. Since a deployment was the entire codebase (Drupal, Drupal plugins/modules, front-end code, PHP scripts, etc), small code changes in a release could easily get overlooked and not be properly tested, breaking the deployment.

Overall, while we had a few quick wins with DNA, the constant regressions due to the setup forced us to consider alternative paths for our architecture.

Rethinking our architecture to include Node.js

Our first foray into using Node.js was a one-off project for creating a lite experience for weather.com that was completely server-side rendered and had minimal JavaScript. The audience had limited bandwidth and minimal device capabilities (for example, low-end smartphones using Facebook’s Free Basics).

Stakeholders were happy with the lite experience, commenting on the nearly instantaneous page loads. Analyzing this proof-of-concept was important in determining our next steps in our architectural overhaul.

Differing from DNA, the lite experience:

  • Rendered pages server-side only
  • Kept the front-end footprint under 30KB (virtually no JavaScript, little CSS, few images)

We used what we learned with the lite experience to help us serve our website more performantly. This started with rethinking our DNA architecture.

Metrics to measure success

Before we worked on a new architecture, we had to show our business that a re-architecture was needed. The first thing we had to determine was what to measure to show success.

We consulted with the Google Ad team to understand how exactly a high-performing webpage impacts business results. Google showed us proof that improving page speed increases ad viewability which translates to revenue.

With that in hand, each day we conducted tests across a set of pages to measure:

  • Speed index
  • Time to first interaction
  • Bytes transferred
  • Time to first ad call

We used a variety of tools to collect our metrics: WebPageTest, Lighthouse, sitespeed.io.

As we compiled a list of these metrics, we were able to judge whether certain experiments were beneficial or not. We used our analysis to determine what needed to change in our architecture to make the site more successful.

While we intended to completely rewrite our DNA website, we acknowledged that we needed to stair-step our approach to experimenting with a newer architecture. Using the above methodology, we created a beta page and A/B tested it to verify its success.

From Shark Tank to a beta of our architecture

Recognizing the performance of our original Node.js proof of concept, we held a “Shark Tank” session where we presented and defended different ideal architectures. We evaluated whole frameworks or combinations of libraries like Angular, React, Redux, Ember, lodash, and more.

From this experiment, we collectively agreed to move from our monolithic architecture to a Node.js backend and newer React frontend. Our timeline for this migration was nine months to a year.

Ultimately, we decided to use a pattern of small JS libraries and tools, similar to that of a UNIX operating system’s tool chain of commands. This pattern gives us the flexibility to swap out one component from the whole application instead of having to refactor large amounts of code to include a new feature.

On the backend, we needed to decouple page creation and page serving. We kept Drupal as a CMS and created a way for documents to be published out to more scalable systems which can be read by other services. We followed the pattern of Backends for Frontends (BFF), which allowed us to decouple our page frontends and allow for more autonomy of our backend downstream systems. We use the documents published by the CMS to deliver pages with content (instead of the traditional method of the CMS monolith serving the pages).
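
A minimal sketch of the BFF pattern, with hypothetical endpoints and a stubbed document store rather than weather.com’s actual services, looks roughly like this:

```js
// A minimal Backends-for-Frontends sketch; endpoints and the
// document store are hypothetical, not weather.com's services.
const express = require("express");
const app = express();

// Stand-in for the scalable store that the CMS publishes documents to.
async function fetchPublishedDoc(pageId) {
  return { id: pageId, title: "Today's Forecast", modules: ["current", "hourly"] };
}

// The web BFF shapes the published document for the web frontend only;
// a mobile BFF could shape the same document differently.
app.get("/web/pages/:id", async (req, res) => {
  const doc = await fetchPublishedDoc(req.params.id);
  res.json({ title: doc.title, modules: doc.modules });
});

app.listen(3000);
```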

Even though Drupal and PHP can render server-side, our developers were more familiar with JavaScript, so using Node.js to implement isomorphic (universal) rendering of the site increased our development velocity.

Developing with Node.js was an easy focus shift for our previous front-end oriented developers. Since the majority of our developers had a primarily JavaScript background, we stayed away from solutions that revolved around separate server-side languages for rendering.

Over time, we implemented and evolved our usage from our first project. After developing our first few pages, we decided to move away from ExpressJS to Koa to use newer JS standards like async/await. We started with pure React but switched to React-like Inferno.js.
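
For illustration, a minimal Koa route doing universal rendering with React’s documented server API (Inferno’s API is similar); this is a sketch, not weather.com’s production code:

```js
// A minimal universal-rendering sketch with Koa and React's server
// API; not weather.com's production code.
const Koa = require("koa");
const React = require("react");
const { renderToString } = require("react-dom/server");

// The same component can render on the server here and hydrate in
// the browser, which is what makes code reuse possible.
function Page({ city }) {
  return React.createElement("h1", null, `Weather for ${city}`);
}

const app = new Koa();

app.use(async (ctx) => {
  // async/await support was a key reason for moving from Express to Koa.
  const html = renderToString(React.createElement(Page, { city: "Atlanta" }));
  ctx.type = "text/html";
  ctx.body = `<!doctype html><div id="root">${html}</div>`;
});

app.listen(3000);
```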

After evaluating many different build systems (gulp, grunt, browserify, systemjs, etc), we decided to use Webpack to facilitate our build process. We saw Webpack’s growing maturity in a fast-paced ecosystem, as well as the pitfalls of its competitors (or lack thereof).

Webpack solved our core issue of DNA’s JS aggregation and minification. With a centralized build process, we could build JS code using a standardized module system, take advantage of the npm ecosystem, and minify the bundles (all during the build process and not during runtime).

Moving from client-side to server-side rendering of the application increased our speed index and got information to the user faster. React helped us in this aspect of universal rendering–being able to share code on both the frontend and backend was crucial to us for server-side rendering and code reuse.

Our first launch of our beta page was a Single Page App (SPA). Traditionally, we had to render each page and location as a hit back to the origin server. With the SPA, we were able to reduce our hits back to the origin server and improve the speed of rendering the next view thanks to universal rendering.

The following image shows how much faster the webpage response was after the SPA was introduced.

[Chart: webpage response times before and after the SPA was introduced]

As our solution included more Node.js, we were able to take advantage of a lot of the tooling associated with a Node.js ecosystem, including ESLint for linting, Jest for testing, and eventually Yarn for package management.

Linting and testing, as well as a more refined CI/CD pipeline, helped reduce bugs in production. This led to a more mature and stable platform as a whole, higher engineering velocity, and increased developer happiness.

Changing deployment strategies

Recognizing our problems with our DNA deployments, we knew we needed a better solution for delivering code to infrastructure. With our DNA setup, we used a managed system to deploy Drupal. For our new solution, we decided to take advantage of newer, container-based deployment and infrastructure methodologies.

By moving to Docker and Kubernetes, we achieved many best practices:

  • Separating out disparate pages into different services reduces failures
  • Building stateless services allows for less complexity, ease of testing, and scalability
  • Builds are repeatable (Docker images ensure the right artifacts are deployed and consistent)

Our Kubernetes deployment allowed us to be truly distributed across four regions and seven clusters, with dozens of services scaled from 3 to 100+ replicas running on 400+ worker nodes, all on IBM Cloud.

Addressing a familiar set of performance issues

After running a successful beta experiment, we continued down the path of migrating pages into our new architecture. Over time, some familiar issues cropped up:

  • Pages became heavier
  • Build times were slower
  • Developer velocity decreased

We had to evolve our architecture to address these issues.

Beta v2: Creating a more performant page

Our second evolution of the architecture was a renaissance (rebirth). We had to go back to basics, revisit our lite experience, and see why it was successful. We analyzed our performance issues and came to the conclusion that the SPA was becoming a performance bottleneck. Although a SPA benefits second page visits, we came to understand that the majority of our users visit the website and leave once they get their information.

We designed and built the solution without a SPA, but kept React hydration in order to keep code reuse across the server and client-side. We paid more attention to the tooling during development by ensuring that code coverage (the percentage of JS client code used vs delivered) was more efficient.

Removing the SPA overall was key to reducing build times as well. Since a page was no longer stitched together from a singular entry point, we split the Webpack builds so that individual pages can have their own set of JS and assets.
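
As a sketch of that split, a multi-entry Webpack configuration gives each page its own bundle. The page names here are hypothetical, not weather.com’s actual build:

```js
// webpack.config.js: a generic multi-entry sketch, not
// weather.com's actual build configuration.
const path = require("path");

module.exports = {
  mode: "production",
  // Each page gets its own entry point instead of one SPA bundle...
  entry: {
    today: "./src/pages/today.js",
    hourly: "./src/pages/hourly.js",
    radar: "./src/pages/radar.js"
  },
  output: {
    // ...so each page ships only its own JS and assets.
    filename: "[name].[contenthash].js",
    path: path.resolve(__dirname, "dist")
  }
};
```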

We were able to reduce our page weight even more compared to the Beta site. Reducing page weight had an overall impact on page load times. The graph below shows how speed index decreased.

[Chart: speed index decrease over time] Note: Some data was lost between January and October of 2019.

This architecture is now our foundation for any and all pages on weather.com.

Conclusion

weather.com was not transformed overnight and it took a lot of work to get where we are today. Adding Node.js to our ecosystem required some amount of trial and error.

We achieved success by understanding our issues, collecting metrics, and implementing and then reimplementing solutions. Our journey was an evolution. Not only was it a change to our back end, but we had to be smarter on the front end to achieve the best performance. Changing our deployment strategy and infrastructure allowed us to achieve multiple best practices, reduce downtimes, and improve overall system stability. JavaScript being used in both the back end and front end improved developer velocity.

As we continue to architect, evolve, and expand our solution, we are always looking for ways to improve. Check out weather.com on your desktop, or for our newer/more performant version, check out our mobile web version on your mobile device.

Tutorial: Use The Weather Company’s APIs to build a Node-RED weather dashboard

Categories: Announcement, Blog, Case Study, Node-RED, tutorial

Build a hyper-local weather dashboard

This blog post was written by John Walicki, CTO for Edge/IoT Advocacy in the Developer Ecosystem Group of the IBM Cognitive Applications Group, and originally published on IBM Developer.

Learn how to build a weather dashboard using a personal weather station, Node-RED, Weather Underground, and The Weather Company APIs and the node-red-contrib-twc-weather nodes. This tutorial demonstrates how to display hyper-local weather information from a residential or farming weather station.

Learning objectives

In this tutorial, you will:

  • Learn the basics of personal weather stations (PWS)
  • Connect your PWS to Weather Underground (WU) and view PWS data on WU
  • Register for a The Weather Company (TWC) API key
  • Get started with the TWC API documentation
  • Learn about Node-RED (local and on IBM Cloud)
  • Explore the node-red-contrib-twc-weather Node-RED PWS node examples
  • Import / Deploy the Weather Dashboard example
  • Display PWS data in your Weather Dashboard
  • Build a Severe Weather Alert Map Node-RED Dashboard using TWC APIs
  • Build a Call for Code Water Sustainability solution

Prerequisites

  • Send your PWS data to http://www.wunderground.com and retrieve your PWS API key
  • If you don’t have a PWS, you can still get a time-restricted TWC API key by joining Call for Code (which gives you access to most of the TWC PWS APIs)

Estimated time

Completing this tutorial should take about 30 minutes.

Steps

Introduction to personal weather stations

Wikipedia defines a personal weather station as a set of weather measuring instruments operated by a private individual, club, association, or business (where obtaining and distributing weather data is not a part of the entity’s business operation). Personal weather stations have become more advanced and can include many different sensors to measure weather conditions. These sensors can vary between models but most measure wind speed, wind direction, outdoor and indoor temperatures, outdoor and indoor humidity, barometric pressure, rainfall, and UV or solar radiation. Other available sensors can measure soil moisture, soil temperature, and leaf wetness.

The cost of a sufficiently accurate personal weather station is less than $200 USD; they have become affordable for citizen scientists and weather buffs.

Connect your PWS to Weather Underground

Weather Underground PWS device

Many PWS brands offer the ability to connect and send weather data to cloud-based services. Weather Underground, part of The Weather Company, an IBM Business, encourages members to register their PWS and send data to http://www.wunderground.com.

Weather Underground PWS data

Members can view their personal weather station data on Weather Underground.

Get a TWC API key and get started with the TWC API documentation

In addition to the wunderground.com dashboard, the PWS data is available through your API Key and a set of robust TWC RESTful APIs. Copy your API Key and click on the View API Documentation button.

Weather Underground API Key

Register for a TWC API key

If you don’t have a Personal Weather Station, you can still register for a time-restricted TWC API key by joining Call for Code 2020. The API Key is valid from March 1 to October 15, 2020. This API key gives you access to most of the TWC Personal Weather Station APIs. You can complete this tutorial using this API key.

Learn about Node-RED

Node-RED is an open source programming tool for wiring together hardware devices, APIs, and online services in new and interesting ways. It provides a browser-based editor that makes it easy to wire together flows using the wide range of nodes in the palette, which can be deployed to its runtime in a single click.

Follow these instructions to install Node-RED locally, or create a Node-RED Starter application in the IBM Cloud.

Install node-red-contrib-twc-weather nodes

Once Node-RED is installed, add the dependencies for this tutorial:

npm install node-red-contrib-twc-weather node-red-dashboard node-red-node-ui-table node-red-contrib-web-worldmap

Explore node-red-contrib-twc-weather Node-RED PWS node examples

The node-red-contrib-twc-weather GitHub repository includes an example flow that exercises each of the Node-RED PWS APIs. You can learn about the nodes and their configuration options by clicking on each node and reading its comprehensive node information tab. Import this PWS-Examples.json flow into your Node-RED Editor and Deploy the flow. Don’t forget to paste in your TWC PWS API key. If you want to explore personal weather station data but don’t have your own PWS, you can query the weather station data at the Ridgewood Fire Headquarters using the StationID KNJRIDGE9.

PWS Example Flow

Import / Deploy the weather dashboard example

Now that the Node-RED node-red-contrib-twc-weather nodes are able to query weather data, let’s build an example Weather Node-RED Dashboard that displays Personal Weather Station current and historical data on a map, in a table, a gauge, and on a chart. The PWS API key includes access to the TWC 5 Day Forecast, which is displayed with weather-lite icons. This flow requires node-red-dashboard, node-red-node-ui-table, and node-red-contrib-web-worldmap. Import this PWS-Dashboard.json flow and Deploy the flow.

Display PWS data in your weather dashboard

Launch the Node-RED Dashboard and experiment with the current conditions, forecast, and map. The Call for Code TWC API key might not have access to private PWS historical data.

PWS Dashboard

Build a Severe Weather Alert Map Node-RED Dashboard using TWC APIs

In addition to the node-red-contrib-twc-weather Node-RED nodes, you can review the TWC Severe Weather API Documentation and use the http request node and your API Key to make calls directly.

The Weather Company APIs include an API to query all of the current Severe Weather Alerts issued by the National Weather Service. This next example plots those Severe Weather Alerts on a Node-RED Dashboard.
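
Outside of Node-RED, a direct call from Node.js might look like the sketch below. The endpoint path and query parameters are assumptions; confirm them against the TWC Severe Weather API documentation:

```js
// A hedged sketch of calling the TWC alerts API directly from Node.js.
// The endpoint path and parameters are assumptions; check the docs.
const https = require("https");

const apiKey = process.env.TWC_API_KEY;
const url =
  "https://api.weather.com/v3/alerts/headlines" + // assumed path
  `?countryCode=US&format=json&language=en-US&apiKey=${apiKey}`;

https.get(url, (res) => {
  res.setEncoding("utf8");
  let body = "";
  res.on("data", (chunk) => (body += chunk));
  res.on("end", () => {
    const alerts = JSON.parse(body);
    // Each alert headline could then be plotted on the worldmap node.
    console.log(alerts);
  });
});
```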

This example flow and Node-RED Dashboard might be useful as part of a Call for Code solution.

Display Severe Weather Alerts on a map

Severe Weather Alert Dashboard

Get the Code: Node-RED flow for Severe Weather Alerts

Summary

Build a Call for Code Water Sustainability solution!

Now that you have completed this tutorial, you are ready to modify these example flows and Node-RED Dashboard to build a Call for Code Water Sustainability solution.