Linux Foundation interview with NASA Astronaut Christina Koch

By Announcement, Blog, Event, OpenJS World

Jason Perlow, Editorial Director at the Linux Foundation, had a chance to speak with NASA astronaut Christina Koch. This year, she completed a record-breaking 328 days aboard the International Space Station, the longest single spaceflight by a woman, and participated in the first all-female spacewalk with fellow NASA astronaut Jessica Meir. Christina gave a keynote at the OpenJS Foundation’s flagship event, OpenJS World, on June 24, 2020, where she shared more on how open source JavaScript and web technologies are being used in space. This post can also be found on the Linux Foundation blog.

JP: You spent nearly a year in space on the ISS, and you dealt with isolation from your friends and family, having spent time only with your crewmates. It’s been about three months for most of us isolating at home because of the COVID-19 pandemic. We haven’t been trained to deal with these types of things — regular folks like us don’t usually live in arctic habitats or space stations. What is your advice for people dealing with these quarantine-type situations for such long periods? 

CK: Well, I can sympathize; it can be a difficult challenge even for astronauts, and it can be hard to work through and come up with strategies for. For me, the #1 thing was making sure I was in charge of the framework I used to view the situation. I flipped it around: instead of thinking about all the things I was missing out on and the things that I didn’t have available to me, I tried to focus on the unique things that I did have, that I would never have again, that I would miss one day.

So every time I heard that thought in my head, that “I just wish I could…” whatever, I would immediately replace it with “this one thing I am experiencing I will never have again, and it is unique”. 

So the advice I have offered since the very beginning of the stay-at-home situation has been to find that thing about our current situation that you truly love, that you know you will miss. Recognize what is unique about this era, whether it is big or small — whether it is philosophical or just a little part of your day — and continually focus on that. The biggest challenge is that we don’t know when this is going to be over, so we can quickly get into a mindset where we are continually replaying in our heads “when is this going to be over? I just want to <blank>” and we can get ourselves into a hole. If you are in charge of the narrative and can flip it, that can really help.

I have to say that we are all experiencing quarantine fatigue. Even if it may have been fun and unique in the beginning — obviously, nobody wanted to be here, and nobody hopes we stay in this situation going forward — there are ways we can deal with it and find the silver lining. Right now, the challenge is staying vigilant: some of us have discovered the strategies that work, but some of us are just tired of working at them, of continually having to be our best selves and bring it every day.

So you need to recommit to those strategies, but sometimes you need to switch it up — halfway through my mission, I changed every bit of external media that was available to me. We have folks that will uplink our favorite TV shows, podcasts, books and magazines, and other entertainment sources. I got rid of everything I had been watching and listening to and started fresh with a new palette. It kind of rejuvenated me and reminded me that there were new things I could feast my mind on and unique sensory experiences I could have. Maybe that is something you can do to keep it fresh and recommit to those strategies. 

JP: I am stuck at home here, in Florida, with my wife. When you were up in the ISS, you were alone, with just a couple of your crewmates. Were you always professional and never fought with each other, or did you occasionally have spats about little things?

CK: Oh my goodness, there were always little spats that could affect our productivity if we allowed it. I can relate on so many levels. I was on the ISS for eleven months with many of the same people, not only working side by side but also socializing on the weekends and during meals at the end of the day. I can also relate because my husband and I were apart for almost two years if you take into account my training in Russia and then my flight. Of course, now we are together 24 hours a day, and we are both fortunate enough that we can work from home.

It is a tough situation, but at NASA, we all draw from a skill set called Expeditionary Behavior. It’s a fancy phrase for helping us identify and avoid conflict situations, and get out of those situations if we find ourselves in them. Those are things like communication — which I know we should all be doing our best at — as well as group living. Other things NASA brought up in our training are self-care, team care, leadership, and particularly, followership. Often, we talk about leadership as an essential quality, but we forget that followership and supporting a leader are also very important. That matters in any relationship, whether it is a family, a marriage, or helping the other people on your team, even if it is an idea they are carrying through for the betterment of the whole community. Self-care and team care are really about recognizing when people on your team or in your household may need support, knowing when you need that support, and being OK with asking for it and being clear about what needs you may have.

A common thread among all those lines is supporting each other. In my opinion, the easiest way to get yourself out of feeling sorry for whatever situation you might be in is to think about the situation everyone else is in and what they might need. Asking someone else, “Hey, how are you doing today, what can I do for you?” is another way to switch that focus. It helped me on my mission, and it is helping me at home in quarantine, while recognizing that it is not always easy. If you are finding that you have to try hard and dig deep to use some of these strategies, you are not alone — that is what it takes right now. But you can do it, and you can get through it.

JP: I have heard that being in the Arctic is not unlike being on another planet. How did that experience help you prepare for being in space, and potentially places such as the Moon or even Mars?

CK: I do think it is similar in a lot of ways. One, because of the landscape: it’s completely barren, very stark, and inhospitable. It gives us an environment to live in where we have to remember that we are vulnerable, and we have to find ways to remain productive and not be preoccupied with that notion when doing our work. Just like on the space station, you can feel quite at home, wearing your polo shirt and Velcro pants, going about your day, and not recognizing that right outside that shell you are in is the vacuum of space, and at any second, things could take a turn for the worse.

In Antarctica and some of the very isolated Arctic areas, should you have a medical emergency, it can often be harder to evacuate or treat a person than it is on the ISS. From the ISS, you can undock and get back to Earth in a matter of hours. At the South Pole, weather conditions could prevent a medevac for weeks. In both situations, you have to develop strategies not to be preoccupied with the environmental concerns while staying vigilant enough to respond should something happen. That was something I took away from that experience — ways not to think about it too much, and to rely on your training should those situations arise. And then, of course, there are all the other things that living in isolation gives us.

The one thing that I found in that realm is something called sensory underload. This is what your mind goes through when you see all the same people and faces, stare at the same walls, taste all the same food, and smell all the same smells for so long. Your brain hasn’t been able to process something new for so long that it affects how you think and how you go about the world. In these situations, we might have to foster new sensory inputs, new situations, and new things to process. NASA is looking into a lot of those things, like augmented reality for long-duration spaceflight, but in places like the Arctic and Antarctic, even bringing in a care package, just to have new things in your environment, can be so important when you are experiencing sensory underload.

JP: The younger people reading this interview might be interested in becoming an astronaut someday. What should the current or next generation — the Gen Ys, the Gen Zs — be thinking about doing today to pursue a career as an astronaut?

CK: I cannot wait to see what that generation does. Already they have been so impressive and so creative. The advice I have is to follow your passions. But in particular, what that means is to take that path that allows you to be your best self and contribute in the maximum possible way. The story I like to tell is that when I was in high school, I was a true space geek, and I went to space camp, and there we learned all the things you need to do to become an astronaut. 

There was a class on it, and they had a whiteboard with a checklist of what you should do — so everyone around me who wanted to be an astronaut was just scribbling this stuff down. And at that moment, I realized that if I were ever to become an astronaut, I would want it to be because I had pursued the things I was naturally drawn to, passionate about, and hopefully naturally good at. If one day that shaped me into someone who could contribute as an astronaut, only then would I be truly worthy of becoming one. So I waited until I felt I could make that case before applying to become an astronaut, and that focus on the idea of contributing is what led me to this role.

The good news about following a path like that is that even if you don’t end up achieving the exact dream you may have, whether that’s becoming an astronaut or something else that may be very difficult to achieve, you’ve done what you’ve loved along the way, which guarantees that you will be successful and fulfilled. And that is the goal. Eyes on the prize, but make sure you are following the path that is right for you.

JP: Some feel that human-crewed spaceflight is an expensive endeavor when we have extremely pressing issues to deal with on Earth — climate change, the population explosion, feeding the planet, and recent problems such as the Coronavirus. What can we learn from space exploration that could potentially solve these issues at home on terra firma?

CK: It is a huge concern in terms of resource allocation; there are so many important things that warrant our attention. And I think that your question, what can we learn from space exploration, is so important, and there are countless examples — the Coronavirus, to start. NASA is studying how the human immune system functions at a fundamental level by observing the changes that occur in a microgravity environment. We’re studying climate change through numerous investigations on the space station and in other areas of NASA. Exploration is enabled by discovery and by technological advances, and where those take us, we can’t even determine. The camera in your smartphone or in your tablet was enabled by NASA technology.

There are countless practical examples, but to me, the real answer is bigger than all of that: what space exploration can show us is what can be accomplished when we work together on a common goal and a shared purpose. There are examples of us overcoming things on a global scale that seemed insurmountable at the beginning, such as battling the hole in the ozone layer. When that first came to light, we had to study it, we had to come up with mitigation strategies, and those had to be adopted by the world, even when people were pointing out the potential economic drawbacks of dealing with it.

But the problem was more significant than that, and we all got together, and we solved it. Looking toward what we can do when we work together with a unified purpose is really what NASA does for us on an even bigger scale. We talk about how exploration and looking into space is uplifting — I consider it uplifting for people all across the spectrum. There are so many ways we can uplift people from all backgrounds. We can provide them with the tools they need to reach their full potential, but then what? What is across that goal line? It is the bigger things that inspire them to be their best, and that is how NASA can be uplifting for everyone: by achieving the big goals.

JP: So recently, NASA resumed human-crewed spaceflight using a commercial launch vehicle, the SpaceX Crew Dragon capsule. Do you feel that the commercialization of space is inevitable? Is the heavy lifting of the future going to come from commercial platforms such as SpaceX, Boeing, et cetera for the foreseeable future? And is the astronaut program always going to be a government-sponsored entity, or will we see private astronauts? And what other opportunities do you see in the private sector for startups to partner with NASA?

CK: For sure. I think that we are already seeing that the commercial aspect is playing out now, and it’s entirely a positive thing for me. You asked about private astronauts — there are already private astronauts training with a company, doing it at NASA through a partnership, and having a contract to fly on a SpaceX vehicle to the ISS through some new ways we are commercializing Low Earth Orbit. That’s already happening, and everyone I know is excited about it. I think anyone with curiosity, anyone who can carry dreams and hopes into space, and bring something back to Earth is welcome in the program.

I think that the model NASA has been using for the last ten years to bring in commercial entities is ideal. We are looking to the next, deeper step: going back to the Moon, and then applying those technologies to go on to Mars. At the same time, we turn over the things we’ve already explored, such as Low Earth Orbit and bringing astronauts to and from the space station, to foster a commercial space industry. To me, that strategy is perfect: a government organization can conduct the work that may not yet have a private motivation or commercial incentive behind it. Once it is incubated, it is passed on, and that is when you see the commercial startups coming.

The future is bright for commercialization in space, and the innovation that can happen when you pass something off to an entirely new set of designers is one of the most exciting aspects of this. One neat example is SpaceX and their spacesuits — I heard that they did not consult the spacesuit experts NASA has worked with in the past. I think that is probably because they did not want to be biased by legacy hardware and legacy ways of doing things. They wanted to reinvent it from the start, to ensure that every aspect was rethought, reengineered, and done in a potentially new way. When you own that legacy hardware, that’s difficult to do — especially in such a risky field, where something tried and true has such a great magnetic draw. So, to break through the innovation barrier, bringing commercial partners onboard is so exciting and important.

JP: Let’s get to the Linux Foundation’s core audience here, developers. You were an engineer, and you used to program. What do you think the role of developers is in space exploration?

CK: Well, it cannot be overstated. When I was in the space industry before becoming an astronaut, I was a developer of instrumentation for space probes. I built the little science gadgets and was typically involved in the sensor front end, at the intersection of the detectors’ physics and the electronics of the readouts. That necessitated a lot of testing, and it was fundamental testing. Most of the programming I did was building the GUIs for all the tests we needed to run, and the I/O to talk to the instrument, to learn what it was telling us and to make sure it could function in the wide variety of environmental states and inputs it was expected to see once it eventually got into space.

That was just my aspect — and then there is all the processing of the data. If you think about astronomy, there is so much we know about the universe through different telescopes, space-based and ground-based, and one of the things we do is anticoincidence detection. We had to come up with algorithms that detect only the kinds of particles or wavelengths we want to identify, and not the ones that deposit energy in different ways than the ones we are trying to study. Even the algorithms that suss out that tiny aspect of what those X-ray detectors on those telescopes do are entirely software-intensive. Some of it is actual firmware because it has to happen so quickly, in billionths of a second. But basically, software enables the entire industry, whether it is the adaptive optics that allow us to see clearly, the post-processing, or even just the algorithms we use to refine and do the R&D; it’s everywhere, and it is ubiquitous. The first GUIs I ever wrote were on a Linux system, using basic open source tools to talk to our instruments. As far as I know, there is no job at NASA a person can walk into with no programming experience. It’s everywhere.

JP: Speaking of programming and debugging, I saw a video of you floating around in the server room on the ISS, which to me looked like a bunch of ThinkPad laptops taped to a bulkhead and sort of jury-rigged networked there. What’s it like to debug technical problems in space with computer systems and dealing with various technical challenges? It’s not like you can call Geek Squad, and they are going to show up in a van and fix your server if something breaks. What do you do up there?

CK: That is exactly right, although there is only one thing inaccurate about that statement — those Lenovos are Velcroed to the wall, not taped (laugh). As astronauts, we rely on the experts on the ground. Interestingly, just like the IT department at any enterprise, the experts can, for the most part, remotely log in to our computers, even though they are in space. That still happens. But if one of the servers is completely dead, they call on us to intercede; we’ve had to re-image drives and do hardware swaps.

JP: OK, a serious question, a religious matter. Are you a Mac or a PC user, an iOS or an Android user, and are you a cat or a dog person? These are crucial questions; you could lose your whole audience if you answer this the wrong way, so be careful.

CK: I am terrified right now. The first one I get to sidestep because I have both a Mac and a PC; I am fluent in both. The second — Android all the way. As for the third, I thought I was a cat person, but since I got my dog Sadie, I am a dog person. We don’t know what breed she is since she is a rescue from the Humane Society, so we call her an LBD — a Little Brown Dog. She is a little sweetheart, and I missed her quite a bit on my mission.

JP: Outside of being an astronaut, I have heard you have already started to poke around GitHub for your nieces and nephews. Are there any particular projects you are interested in? Any programming languages or tools you might want to learn or explore?

CK: Definitely. Well, I want to learn Python because it is really popular, and it would help with my Raspberry Pi projects. Right now I am writing an app in Android Studio, which I consulted on with my 4-year-old niece, who wanted a journal app. I’m not telling anyone my username on GitHub because I am too embarrassed about what a terrible coder I am. I wouldn’t want anyone to see it, but the app will be uploaded there. Her brother wants the app too, so that necessitated the version control. It’s just for fun for now, since I’ve missed that technical aspect of my last job. I do have some development boards, and I have various home projects and things like that.

JP: In your keynote, you mentioned that the crew’s favorite activity in space is pizza night. What is your favorite food or cuisine, and is there anything that you wished you could eat in space that you can’t?

CK: My favorite food or cuisine on Earth is something you can’t have in space: sushi, or poke, all the fresh seafood-type things I was introduced to from living in American Samoa and visiting Hawaii and places like that. I missed those. All the food we have in space is rehydrated or from MREs, so it doesn’t have a lot of texture; it has to have the consistency of something like mac and cheese. So what I really missed is chips — especially chips and salsa. Anything crunchy is going to crumble up and go everywhere, so we don’t have anything crunchy. Unfortunately, I have eaten enough chips and salsa to make up for it since I’ve been back.

JP: Thank you very much, Christina, for your time and insights! Great interview.

Watch Christina’s full keynote here:

OpenJS World Day One Highlights

By Announcement, Blog, Event, OpenJS World, Project Updates

Today was the first day of OpenJS World, the OpenJS Foundation’s virtual, global event bringing together the JavaScript and web development community. Today was filled with incredible talks and keynotes, and while a little different from what we are used to, it offered us all a chance to learn and grow, virtually. We are thrilled to have gathered viewers from across the globe, and we hope you enjoyed Day One as much as we did! For those who did not get to attend today’s event, there’s still time to register for Day Two, and replays will be available in both the event platform and on YouTube next week.

Keynotes

Day one kicked off with Robin Ginn, OpenJS Foundation Executive Director, welcoming everyone and giving a brief history of JavaScript as well as an overview of our Foundation. We also heard from Anil Dash, CEO of Glitch, on tech and inclusivity. He asked our community some great questions about the biases built into tech due to a lack of diversity among those doing the building.

Other keynote speakers included:

  • Cassidy Williams, Principal Developer Experience Engineer at Netlify, who gave her keynote on “Learning By Teaching for Your Community.”
  • Prosper Otemuyiwa, Co-founder & CTO of Eden, who talked about “Media Performance at Scale.”
  • Keeley Hammond, Senior Software Engineer at InVision, who spoke about Electron’s journey as an OpenJS Foundation hosted project.
  • Malte Ubl, Principal Engineer at Google, who spoke about the AMP Project.
  • Dr. Joy Rankin, Research Lead at the AI Now Institute and Research Scholar at New York University, who sat down with Kris Borchers to discuss “How (not) to Save the World with Tech.”

OpenJS World Project News

We are thrilled to share that both AMP and Electron have graduated from the incubation program!

AMP Project Graduates Incubation Program

Today, during the OpenJS World keynotes, Malte Ubl, Principal Engineer at Google, creator of AMP, and a member of the AMP Project’s Technical Steering Committee, announced that the AMP Project has graduated from the Foundation’s incubation program. AMP entered incubation in October of 2019, and during this time, the collaboration between the project and the Foundation has been very beneficial. Graduating from the OpenJS Foundation incubation program signals more opportunities for growth and diversity for the open source AMP Project and its developers. In becoming a full-fledged OpenJS Foundation project, AMP can better deliver on its vision of “a strong, user-first open web forever.”

Electron Project Graduates Incubation Program

Today at OpenJS World, Keeley Hammond, Senior Software Engineer at InVision and a member of the Electron governance team, took the keynote stage and let the world know that Electron has successfully graduated from the Foundation’s incubation program and is now an Impact Project. Electron entered incubation in December of 2019, at the last OpenJS Foundation global conference in Montreal. This is an important step, as it shows real growth, maturity, and stability for the popular framework, which is used for building desktop apps across multiple platforms with web technologies.

Sessions 

Today we featured more than 30 breakout sessions across a variety of topics, from AI to application development and project-specific talks. A replay of each of these talks is available within the OpenJS World event platform. You will need to register for the event or log in to the platform to access these sessions. To find a replay, navigate to the home page, click into the topic area, and find the talks on demand. We will also post the sessions to the OpenJS YouTube channel on Monday, June 29, 2020.

Engaging Virtually Through Fun and Games

We’ve created a few opportunities for fun during these two days. Attendees can create a virtual badge, collect badges for sessions attended, and even earn points for cool OpenJS swag! We also held a scavenger hunt today where attendees searched through sponsors’ booths for birthday-related items, in honor of JavaScript’s 25th anniversary! Participants who collected all the birthday party items will be entered into a drawing for a DJI Tello drone, provided by IBM’s Call for Code team! Learn more in the Fun and Games section on the event platform.

OpenJS World Day Two, and Collab Summit

We are just getting started this week! Please join us tomorrow as we kick off our keynote sessions with NASA astronaut Christina Koch! Tomorrow will be another fantastic day, a trend that will continue into the OpenJS Collab Summit on Thursday (Project Day) and Friday (Cross-Project Day).

Thanks

Finally, and certainly not least of all, we send our sincerest THANK YOU to the sponsors who have made this event possible. This year has been challenging for so many, and having sponsors come through to support this event is extremely appreciated.

Thanks to Diamond Sponsor IBM, Gold Sponsors Cloud Native Computing Foundation and Google, Silver Sponsors Red Hat/OpenShift and SoftwareAG, Bronze Sponsors Heroku, Profound Logic, Sentry and White Source.

OpenJS World – Featured Profile – Beth Griggs

By Announcement, Blog, Event, Node.js, OpenJS World

Since 2016, Beth Griggs has been working as an Open Source Engineer at IBM, where she focuses on the Node.js runtime. Node.js is an Impact Project in the OpenJS Foundation. Beth is a Node.js Technical Steering Committee member and a member of the Node.js Release Working Group, where she is involved with auditing commits for the long-term support (LTS) release lines and creating releases.

What was your first experience of Node.js?

I joined the party a little late; my first experience of Node.js was while completing my final-year engineering project for my Bachelor’s degree in 2016. My engineering project was to create a ‘living meta-analysis’ tool that would enable researchers, specifically psychologists, to easily combine and update findings from related independent studies. I originally implemented the tool using a PHP framework, but after some time I realized I wasn’t enjoying the developer experience and was hitting limitations with the framework. Halfway through my final year of university, I heard some classmates raving about Node.js, so I decided to check it out. Within a few weeks, I had reimplemented my project from scratch using Node.js.

How did you start contributing to Node.js?

I rejoined IBM in 2016, having spent my gap year prior to university at IBM as a Java Test Engineer in their WebSphere organization. I joined the Node.js team in IBM Runtime Technologies, which at the time was responsible for building and testing the IBM SDK for Node.js. From running the Node.js test suite regularly internally, my team identified flaky tests that needed fixing out in the community — which turned into some of my first contributions to Node.js core.

Over the next few years, our team deprecated the IBM SDK for Node.js in favor of maintaining those platforms directly in the Node.js community. Around the same time, Myles Borins offered to mentor me to become involved with the Release Working Group, with a view to becoming a Node.js releaser (thanks, Myles!). Since then, that’s the area of Node.js where most of my contributions have been focused.

What has changed since you first started contributing to Node.js?

One of the biggest changes is the emphasis on onboarding new contributors to major parts of the project. New names and faces are being onboarded into positions where they can actively contribute to Node.js, and there has been an increase in socializing the ways people can contribute other than code.

Documentation of the internal contributor processes has improved a lot too, but there’s still room to improve.

What are you most excited about with the Node.js project at the moment?

I’m really enjoying the work that is happening in the pkgjs GitHub organization, where we’re building tools for package maintainers. I’m excited to see the tools that come out of the pkgjs organization and the Node.js Package Maintenance team.

What are you most looking forward to at OpenJS World?

There are so many great talks (although I’m a little biased, as I was on the content team). I’m really looking forward to the keynote with Christina H Koch, a NASA astronaut, and also the ‘Broken Promises’ workshop by James and Matteo from NearForm.

On the Cross Project Summit day, I’m looking forward to the Node.js Package Maintenance session. We’ve got a lot of momentum in that working group at the moment and it’ll be great to have input from the other OpenJS projects. I’m hoping my talk “Chronicles of the Node.js Ecosystem: The Consumer, The Author, and The Maintainer” is a good primer for the session. 

I’ll also be at the IBM virtual booth throughout the conference and catching my colleagues’ talks (https://developer.ibm.com/technologies/node-js/blogs/ibm-at-openjs-world-2020). 

What does your role at IBM include other than contributing to the Node.js community?

A wide variety of things, really; no week is ever full of the same tasks. I’m often preparing talks and workshops for various conferences. Alongside that, I spend my time researching common methods and best practices for deploying Node.js applications to the cloud, specifically focusing on IBM Cloud and OpenShift. I often find myself assisting internal teams with their usage of Node.js, analyzing various IBM offerings from a typical Node.js developer’s point of view, and providing feedback. I’m also scrum master for my team, so a portion of my time is taken up with those responsibilities too.

What do you do outside of work?

Most often, hanging out with my dog, Laddie. I’m a DIY enthusiast, mainly painting or upcycling various pieces of second-hand furniture. Since the start of lockdown in the UK, I have also been writing a book, which is a convenient pastime. I’m a big fan of replaying my old PS1 games too.

Where should people go to get started contributing to the Node.js Project? 

Go to https://www.nodetodo.org/, a website that walks you through the path towards your first contribution to Node.js. As long as you’re a little bit familiar with Node.js, you can start there. The other option is to look for issues labeled ‘good first issue’ on repositories in the Node.js GitHub organization.

Alternatively, you can join one of our working group sessions on Zoom and start participating in discussions. The sessions are listed in the nodejs.org calendar. If you’re specifically interested in the Node.js Release Working Group, I run fortnightly mentoring/shadowing sessions that you’re welcome to join.

Node.js Certifications update: Node.js 10 to Node.js 12

By Announcement, Blog, Certification

The OpenJS Node.js Application Developer (JSNAD) and the OpenJS Node.js Services Developer (JSNSD) Exams will be updated from Node.js version 10, which is now in maintenance, to Node.js version 12, which is the current LTS (Long Term Support) line. Changes will come into effect June 16, 2020. All tests taking place after 8:00 am PT on June 16, 2020 will be based on Node.js version 12.

These exams are evergreen: soon after a Node.js version becomes the only LTS line, the certifications are updated to stay in lockstep with that LTS version. Now that Node.js version 10 has moved into maintenance, certifications will be based on Node.js version 12.

While there are no changes to the current set of Domains and Competencies for the JSNAD and JSNSD exams, candidates are advised to review the functionality of libraries and frameworks on Node.js version 12. For a full list of differences between Node.js version 10 and Node.js version 12, see https://nodejs.org/ca/blog/uncategorized/10-lts-to-12-lts/.

OpenJS Foundation welcomes two new board members from Google and Joyent

By Announcement, Blog

The OpenJS Foundation family is growing! We are happy to welcome two new members, Sonal Bhoraniya from Google and Sean Johnson from Joyent, to our Board of Directors.

Sonal Bhoraniya is an attorney on Google’s Open Source Compliance Team, ensuring compliance with open source policies and adherence to the collaborative, open source culture. Sonal believes in the value of maintaining and contributing to open source communities. In addition to her legal acumen, Sonal is a JavaScript/Node.js developer. She earned her Bachelor of Science from Duke and her JD from Vanderbilt. In her role on the Board, Sonal is excited to support and help ensure the healthy and sustainable growth of the OpenJS community.

Sean Johnson leads the Commercial Group at Joyent, a subsidiary of Samsung, covering a variety of diverse open source projects, products, and services. Sean is an OSS-first product leader and an advocate for vibrant and productive open source communities. In his role as a member of the Board, Sean hopes to accelerate social and community equity in the OpenJS ecosystem through enablement, collaboration, and shared values. Sean earned a Bachelor of Arts from Vanderbilt.

The OpenJS Foundation looks forward to the contributions of both Sonal and Sean and is honored to have them serve as Platinum Directors.

OpenJS Foundation Joins Open Source Initiative as Newest Affiliate Member

By Announcement, Blog

Membership emphasizes growing outreach and engagement with broader software and technology communities.


PALO ALTO, Calif., June 11, 2020 — The Open Source Initiative® (OSI), the international authority in open source licensing, is excited to announce the affiliate membership of the OpenJS Foundation, the premier home for critical open source JavaScript projects, including Appium, Dojo, jQuery, Node.js, webpack, and 30 more. The OpenJS Foundation’s membership in the OSI highlights the incredible impact of JavaScript across all industries, web technologies, communities, and, ultimately, the open source software movement.

“The OpenJS Foundation is thrilled to join OSI as an Affiliate Member, and we’re proud to have Myles Borins represent our JavaScript communities,” said Robin Ginn, OpenJS Foundation Executive Director. “In addition to all of our projects using OSI-approved licenses, our neutral organization shares common principles with the OSI, including technical governance and accountability. As an Affiliate Member, we can better advance open source development methodologies for individual open source projects and the ecosystem as a whole.”

Formed through a merger of the JS Foundation and the Node.js Foundation in 2019, the OpenJS Foundation supports the healthy growth of JavaScript and web technologies by providing a neutral organization to host and sustain projects, as well as to collaboratively fund activities that benefit the ecosystem as a whole. Originally developed in 1995, JavaScript is now arguably the most widely used programming language, with GitHub reporting it as the “Top Language” from 2014 through 2019 in its State of the Octoverse report. JavaScript’s growth and popularity can be attributed to its accessibility, as it is often identified as a language for new developers, and to its applicability as a core component of today’s web-driven technology landscape. JavaScript also serves as a conduit to, and proof of concept for, open source software development, projects, and communities. For some, JavaScript provides their first experience in open source development and communities; for others, their experience in JavaScript projects and communities is helping to lead and further refine the larger open source movement itself.

The OpenJS Foundation serves as a valuable resource both for new JavaScript developers and emerging projects, offering a foundation for support and growth, and for veterans with broad experience and mature projects, providing a platform to extend best practices and guiding principles out to the broader open source software community.

Working with the OpenJS Foundation gives the Open Source Initiative a unique opportunity to engage with one of the open source software movement’s largest and most influential communities. JavaScript developers and projects are deeply committed to open source as a development model and to its ethos of co-creation through collaboration and contribution. That makes the OpenJS Foundation’s affiliate membership, and the community it represents, a critical partnership not only for open source’s continued growth and development but for the OSI as well.

“We are thrilled to welcome aboard OpenJS as an OSI Affiliate Member,” said Tracy Hinds, Chief Financial Officer of the OSI. “It is a time in open source where it’s vital to learn from, and be challenged by, the growing concerns about sustainability. We look to OpenJS as a great partner in iterating on the questions we should be asking about how projects are building, maintaining, and sustaining open source software.”

The OSI Affiliate Member Program, available at no cost, allows non-profit organizations to join and support the OSI’s work to promote and protect open source software. Affiliate members participate directly in the direction and development of the OSI through board elections and incubator projects that support software freedom. Membership provides a forum where open source leaders, businesses, and communities engage through member-driven initiatives to increase awareness and adoption of open source software.

About OpenJS Foundation 

The OpenJS Foundation (https://openjsf.org/) is committed to supporting the healthy growth of the JavaScript ecosystem and web technologies by providing a neutral organization to host and sustain projects, as well as collaboratively fund activities for the benefit of the community at large. The OpenJS Foundation is made up of 35 open source JavaScript projects, including Appium, Dojo, jQuery, Node.js, and webpack, and is supported by 30 corporate and end-user members, including GoDaddy, Google, IBM, Intel, Joyent, and Microsoft. These members recognize the interconnected nature of the JavaScript ecosystem and the importance of providing a central home for projects that represent significant shared value.

About the Open Source Initiative

For over 20 years, the Open Source Initiative (https://opensource.org/) has worked to raise awareness and adoption of open source software, and build bridges between open source communities of practice. As a global non-profit, the OSI champions software freedom in society through education, collaboration, and infrastructure, stewarding the Open Source Definition (OSD), and preventing abuse of the ideals and ethos inherent to the open source movement.

Project Update: Testing in parallel with Mocha v8.0.0

By Announcement, Blog, Mocha

Use parallel mode to achieve significant speedups for large test suites

This blog was written by Christopher Hiller and was originally posted on the IBM Developer Blog. Mocha is a hosted project at the OpenJS Foundation.

With the release of Mocha v8.0.0, Mocha now supports running in parallel mode under Node.js. Running tests in parallel mode allows Mocha to take advantage of multi-core CPUs, resulting in significant speedups for large test suites.

Read about parallel testing in Mocha’s documentation.

Before v8.0.0, Mocha only ran tests in serial: one test had to finish before moving on to the next. While this strategy is not without benefits — it’s deterministic and snappy on smaller test suites — it can become a bottleneck when running a large number of tests.

Let’s take a look at how to take advantage of parallel mode in Mocha by enabling it on a real-world project: Mocha itself!

Installation

The Node.js v8.x release line has reached End-of-Life; Mocha v8.0.0 requires Node.js v10, v12, or v14.

Mocha doesn’t need to install itself, but you probably do. You need Mocha v8.0.0 or newer, so:

npm i mocha@8 --save-dev

Moving right along…

Use the --parallel flag

In many cases, all you need to do to enable parallel mode is supply --parallel to the mocha executable. For example:

mocha --parallel test/*.spec.js

Alternatively, you can specify any command-line flag by using a Mocha configuration file. Mocha keeps its default configuration in a YAML file, .mocharc.yml. It looks something like this (trimmed for brevity):

# .mocharc.yml
require: 'test/setup'
ui: 'bdd'
timeout: 300

To enable parallel mode, I’m going to add parallel: true to this file:

# .mocharc.yml w/ parallel mode enabled
require: 'test/setup'
ui: 'bdd'
timeout: 300
parallel: true

Note: Examples below use --parallel and --no-parallel for the sake of clarity.

Let’s run npm test and see what happens!

Spoiler: It didn’t work the first time

Oops, I got a bunch of “timeout” exceptions in the unit tests, which use the default timeout value (300ms, as shown above). Look:

  2) Mocha
       "before each" hook for "should return the Mocha instance":
     Error: Timeout of 300ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. (/Users/boneskull/projects/mochajs/mocha/test/node-unit/mocha.spec.js)
      at Hook.Runnable._timeoutError (lib/runnable.js:425:10)
      at done (lib/runnable.js:299:18)
      at callFn (lib/runnable.js:380:7)
      at Hook.Runnable.run (lib/runnable.js:345:5)
      at next (lib/runner.js:475:10)
      at Immediate._onImmediate (lib/runner.js:520:5)
      at processImmediate (internal/timers.js:456:21)

That’s weird. I run the tests a second time, and different tests throw “timeout” exceptions. Why?

Because of many variables — from Mocha to Node.js to the OS to the CPU itself — parallel mode exhibits a much wider range of timings for any given test. These timeout exceptions don’t indicate a newfound performance issue; rather, they’re a symptom of a naturally higher system load and nondeterministic execution order.

To resolve this, I’ll increase Mocha’s default test timeout from 300ms (0.3s) to 1000ms (1s):

# .mocharc.yml
# ...
timeout: 1000

Mocha’s “timeout” functionality is not to be used as a benchmark; its intent is to catch code that takes an unexpectedly long time to execute. Since we now expect tests to potentially take longer, we can safely increase the timeout value.
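If only a handful of suites are slow, the timeout can also be raised locally instead of globally. Here is a minimal sketch using Mocha’s standard this.timeout() API (the suite and test names are illustrative):

describe('slow integration suite', function () {
  // raise the timeout for this suite only; the global default still
  // applies to every other suite
  this.timeout(10000);

  it('eventually responds', async function () {
    // an individual test can override it again: this.timeout(30000);
  });
});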

Now that the tests pass, I’m going to try to make them pass faster.

Optimizing parallel mode

By default, Mocha’s maximum job count is n – 1, where n is the number of CPU cores on the machine. This default value will not be optimal for all projects. The job count also does not imply that “Mocha gets to use n – 1 CPU cores,” because that’s up to the operating system. It is, however, a default, and it does what defaults do.

When I say “maximum job count,” I mean that Mocha could spawn this many worker processes if needed; it depends on the count and execution time of the test files.
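To make the default concrete, the rule amounts to something like the following rough sketch, based on the documented “n – 1” behavior rather than Mocha’s actual source:

// sketch: derive the default maximum job count from the CPU count
const os = require('os');
const defaultMaxJobs = Math.max(os.cpus().length - 1, 1);
console.log(`at most ${defaultMaxJobs} worker processes by default`);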

To compare performance, I use the friendly benchmarking tool, hyperfine; I’ll use this to get an idea of how various configurations will perform.

Regarding hyperfine usage: in the examples below, I’m passing two options to hyperfine. The first is -r 5 for “runs,” which runs the command five (5) times; the default is ten (10), but this is slow, and I’m impatient. The second option is --warmup 1, which performs a single “warmup” run whose result is discarded. Warmup runs reduce the chance that the first k runs will be significantly slower than the subsequent runs, which may skew the final result. If this happens, hyperfine will even warn you about it, which is why I’m using it!

If you try this yourself, you will need to replace bin/mocha with node_modules/.bin/mocha or mocha, depending on your environment; bin/mocha is the path to the mocha executable relative to the working copy root.

Mocha’s integration tests (about 260 tests over 55 files) typically make assertions about the output of the mocha executable itself. They also need a longer timeout value than the unit tests; below, we use a timeout of ten (10) seconds.

I run the integration tests in serial. Nobody ever claimed they ran at ludicrous speed:

$ hyperfine -r 5 --warmup 1 "bin/mocha --no-parallel --timeout 10s test/integration/**/*.spec.js"
Benchmark #1: bin/mocha --no-parallel --timeout 10s test/integration/**/*.spec.js
  Time (mean ± σ):     141.873 s ±  0.315 s    [User: 72.444 s, System: 14.836 s]
  Range (min … max):   141.447 s … 142.296 s    5 runs

That’s over two (2) minutes. Let’s try it again in parallel mode. In my case, I have an eight-core CPU (n = 8), so by default, Mocha uses seven (7) worker processes:

$ hyperfine -r 5 --warmup 1 "bin/mocha --parallel --timeout 10s test/integration/**/*.spec.js"
Benchmark #1: bin/mocha --parallel --timeout 10s test/integration/**/*.spec.js
  Time (mean ± σ):     65.235 s ±  0.191 s    [User: 78.302 s, System: 16.523 s]
  Range (min … max):   65.002 s … 65.450 s    5 runs

Using parallel mode shaves 76 seconds off of the run, down to just over a minute! That’s about a 54% speedup. But, can we do better?

I can use the --jobs/-j option to specify exactly how many worker processes Mocha will potentially use. Let’s see what happens if I reduce this number to four (4):

$ hyperfine -r 5 --warmup 1 "bin/mocha --parallel --jobs 4 --timeout 10s test/integration/**/*.spec.js"
Benchmark #1: bin/mocha --parallel --jobs 4 --timeout 10s test/integration/**/*.spec.js
  Time (mean ± σ):     69.764 s ±  0.512 s    [User: 79.176 s, System: 16.774 s]
  Range (min … max):   69.290 s … 70.597 s    5 runs

Unfortunately, that’s slower. What if I increased the number of jobs, instead?

$ hyperfine -r 5 --warmup 1 "bin/mocha --parallel --jobs 12 --timeout 10s test/integration/**/*.spec.js"
Benchmark #1: bin/mocha --parallel --jobs 12 --timeout 10s test/integration/**/*.spec.js
  Time (mean ± σ):     64.175 s ±  0.248 s    [User: 80.611 s, System: 17.109 s]
  Range (min … max):   63.809 s … 64.400 s    5 runs

Twelve (12) is ever-so-slightly faster than the default of seven (7). Remember, my CPU has eight (8) cores. Why does spawning more processes increase performance?

I speculate it’s because these tests aren’t CPU-bound. They mostly perform asynchronous I/O, so the CPU has some spare cycles waiting for tasks to complete. I could spend more time trying to squeeze another 500ms out of these tests, but for my purposes, it’s not worth the bother. Perfect is the enemy of good, right? The point is to illustrate how you can apply this strategy to your own projects and arrive at a configuration you’re happy with.

When to avoid parallel mode

Would you be shocked if I told you that running tests in parallel is not always appropriate? No, you would not be shocked.

It’s important to understand two things:

  1. Mocha does not run individual tests in parallel. Mocha runs test files in parallel.
  2. Spawning worker processes is not free.

That means if you hand Mocha a single, lonely test file, it will spawn a single worker process, and that worker process will run the file. If you only have one test file, you’ll be penalized for using parallel mode. Don’t do that.

Other than the “lonely file” non-use-case, the unique characteristics of your tests and sources will impact the result. There’s an inflection point below which running tests in parallel will be slower than running in serial.

In fact, Mocha’s own unit tests (about 740 tests across 35 files) are a great example. Like good unit tests, they try to run quickly, in isolation, without I/O. I’ll run Mocha’s unit tests in serial, for the baseline:

$ hyperfine -r 5 --warmup 1 "bin/mocha --no-parallel test/*unit/**/*.spec.js"
Benchmark #1: bin/mocha --no-parallel test/*unit/**/*.spec.js
  Time (mean ± σ):      1.262 s ±  0.026 s    [User: 1.286 s, System: 0.145 s]
  Range (min … max):    1.239 s …  1.297 s    5 runs

Now I’ll try running them in parallel. Despite my hopes, this is the result:

$ hyperfine -r 5 --warmup 1 "bin/mocha --parallel test/*unit/**/*.spec.js"
Benchmark #1: bin/mocha --parallel test/*unit/**/*.spec.js
  Time (mean ± σ):      1.718 s ±  0.023 s    [User: 3.443 s, System: 0.619 s]
  Range (min … max):    1.686 s …  1.747 s    5 runs

Objectively, running Mocha’s unit tests in parallel slows them down by about half a second. This is the overhead of spawning worker processes (and the requisite serialization for inter-process communication).

I’ll go out on a limb and predict that many projects having very fast unit tests will see no benefit from running those tests in Mocha’s parallel mode.

Remember my .mocharc.yml? I yanked that parallel: true out of there; instead, Mocha enables parallel mode only when running its integration tests.

In addition to being generally unsuitable for these types of tests, parallel mode has some other limitations; I’ll discuss these next.

Caveats, disclaimers, and gotchas — oh my!

Due to technical limitations (i.e., “reasons”), a handful of features aren’t compatible with parallel mode. If you try, Mocha will throw an exception.

Check out the docs for more information and workarounds (if any).

Unsupported reporters

If you’re using the markdown, progress, or json-stream reporters, you’re out of luck for now. These reporters need to know how many tests we intend to execute up front, and parallel mode does not have that information.

This could change in the future, but would introduce breaking changes to these reporters.

Exclusive tests

Exclusive tests (.only()) do not work. If you try, Mocha runs tests (as if .only() were not used) until it encounters a usage of .only(), at which point it aborts and fails.

Given that exclusive tests are typically used in a single file, parallel mode is also unsuitable for this situation.
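For reference, an exclusive test looks like the following minimal sketch, which uses Mocha’s standard .only() API (the suite and test names are illustrative):

describe('checkout', function () {
  // in serial mode, .only() restricts the run to this test;
  // in parallel mode, Mocha aborts and fails when it encounters it
  it.only('applies the discount code', function () {
    // ...
  });

  it('is skipped whenever .only() is present elsewhere', function () {});
});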

Unsupported options

Incompatible options include --sort, --delay, and, importantly, --file. In short, it’s because we cannot run tests in any specific order.

Of these, --file likely impacts the greatest number of projects. Before Mocha v8.0.0, --file was the recommended way to define “root hooks.” Root hooks are hooks (such as beforeEach(), after(), setup(), etc.) which all other test files will inherit. The idea is that you would define root hooks in, for example, hooks.js, and run Mocha like so:

mocha --file hooks.js "test/**/*.spec.js"

All --file parameters are considered test files and will be run in order and before any other test files (in this case, test/**/*.spec.js). Because of these guarantees, Mocha “bootstraps” with the hooks defined in hooks.js, and this affects all subsequent test files.

This still works in Mocha v8.0.0, but only in serial mode. But wait! Its use is now strongly discouraged (and will eventually be fully deprecated). In its place, Mocha has introduced Root Hook Plugins.

Root Hook Plugins

Root Hook Plugins are modules (CJS or ESM) which have a named export, mochaHooks, in which the user can freely define hooks. Root Hook Plugin modules are loaded via Mocha’s --require option.

Read the docs on using Root Hook Plugins

The documentation (linked above) contains a thorough explanation and more examples, but here’s a straightforward one.

Say you have a project with root hooks loaded via --file hooks.js:

// hooks.js
beforeEach(function() {
  // do something before every test
  this.timeout(5000); // trivial example
});

To convert this to a Root Hook Plugin, change hooks.js to be:

// hooks.js
exports.mochaHooks = {
  beforeEach() {
    this.timeout(5000);
  }
};

Tip: This can be an ESM module, for example, hooks.mjs; use a named export of mochaHooks.
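For instance, the equivalent ESM module would look something like this, mirroring the CJS example above:

// hooks.mjs — the same root hooks, exported as the named export mochaHooks
export const mochaHooks = {
  beforeEach() {
    this.timeout(5000);
  }
};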

When calling the mocha executable, replace --file hooks.js with --require hooks.js. Nifty!
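The earlier command then becomes:

mocha --require hooks.js "test/**/*.spec.js"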

Troubleshooting parallel mode

While parallel mode should just work for many projects, if you’re still having trouble, refer to this checklist to prepare your tests:

  • ✅ Ensure you are using a supported reporter.
  • ✅ Ensure you are not using other unsupported flags.
  • ✅ Double-check your config file; options set in config files are merged with any command-line options.
  • ✅ Look for root hooks (like the hooks.js example above) in your tests. Move them into a root hook plugin.
  • ✅ Do any assertion, mock, or other test libraries you’re consuming use root hooks? They may need to be migrated for compatibility with parallel mode.
  • ✅ If tests are unexpectedly timing out, you may need to increase the default test timeout (via --timeout).
  • ✅ Ensure your tests do not depend on being run in a specific order.
  • ✅ Ensure your tests clean up after themselves; remove temp files, handles, sockets, etc. Don’t try to share state or resources between test files.

What’s next

Parallel mode is new and not perfect; there’s room for improvement. But to do so, Mocha needs your help. Send the Mocha team your feedback! Please give Mocha v8.0.0 a try, enable parallel mode, use Root Hook Plugins, and share your thoughts.

Dojo turns 16, New Dojo 7 Delivers Suite of Reactive Material Widgets

By Announcement, Blog, Dojo

Dojo, an OpenJS Foundation Impact Project, just hit a new milestone. Dojo 7 is a progressive framework for modern web applications built with TypeScript, making it an essential tool for building modern websites. The Dojo framework scales easily and allows building anything from simple static websites all the way up to enterprise-scale single-page reactive web applications.

Dojo 7 Widgets takes a step forward in out-of-the-box usability, adding more than 20 new widgets and a Material theme that developers can use to build feature-rich applications even faster. The new widgets are consistent, usable, and accessible, covering important website building blocks like cards, passwords, forms, and more.

See the Dojo Widgets documentation and examples for more information. 
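To give a feel for what building with the new widgets looks like, here is a hypothetical sketch based on Dojo’s documented TSX factory API; the file name, widget composition, and properties are illustrative, not taken from this release:

// App.tsx — a minimal, hypothetical Dojo 7 widget
import { create, tsx } from '@dojo/framework/core/vdom';
import Button from '@dojo/widgets/button';

const factory = create();

export default factory(function App() {
  // render a heading and a themed Material button
  return (
    <div>
      <h1>Hello from Dojo 7</h1>
      <Button onClick={() => console.log('clicked')}>Click me</Button>
    </div>
  );
});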

Dojo is no flavor-of-the-month JavaScript framework. The Dojo Toolkit was started in 2004, with a non-profit organization established to promote its adoption. In 2016, that foundation merged with the jQuery Foundation to become the JS Foundation. Then, in March 2019, the JS Foundation merged with the Node.js Foundation to become the OpenJS Foundation. Dojo, therefore, gives the OpenJS Foundation organizational roots that predate the iPhone.

In 2018, modern Dojo arrived with Dojo 2, a complete rewrite and rethink of Dojo into its current form: a lean, modern, TypeScript-first, batteries-included progressive framework. Aligning with modern standards and best practices, the resulting distribution build of Dojo can include zero JavaScript code for statically built websites, or as little as 13KB of compressed JavaScript for full-featured web apps.

Dojo has been used widely over the years by companies such as Cisco, JP Morgan, Esri, Intuit, ADP, Fannie Mae, Daimler, and many more. Applications created with the Dojo Toolkit more than 10 years ago still work today with only minor adjustments and upgrades.

Modern Dojo is open source software available under the modified BSD license. Developers can try modern Dojo from Code Sandbox, or install Dojo via npm:

npm i @dojo/cli @dojo/cli-create-app -g

Create your first app

dojo create app --name hello-world

Get started with widgets

npm install @dojo/widgets 

Visit dojo.io for documentation, tutorials, cookbooks, and other materials. Read Dojo’s blog on this new release here.

How The Weather Company uses Node.js in production

By Announcement, Blog, ESLint, member blog, Node.js

Using Node.js improved site speed, performance, and scalability

This piece was written by Noel Madali and originally appeared on the IBM Developer Blog. IBM is a member of the OpenJS Foundation.

The Weather Company uses Node.js to power their weather.com website, a multinational weather information and news website available in 230+ locales and localized in about 60 languages. As an industry leader in audience reach and accuracy, weather.com delivers weather data, forecasts, observations, historical data, news articles, and video.

Because weather.com offers a location-based service that is used throughout the world, its infrastructure must support consistent uptime, speed, and precise data delivery. Scaling the solution to billions of unique locations has created multiple technical challenges and opportunities for the technical team. In this blog post, we cover some of the unique challenges we had to overcome when building weather.com and discuss how we ended up using Node.js to power our internationalized weather application.

Drupal ‘n Angular (DNA): The early days

In 2015, we were a Drupal ‘n Angular (DNA) shop. We unofficially pioneered the industry by marrying Drupal and Angular together to build a modular, content-based website. We used Drupal as our CMS to control content and page configuration, and we used Angular to code front-end modules.

Front-end modules were small blocks of user interface with data and some interactive elements. Content editors would move the modules around to visually compose a page, and they used Drupal to create articles about weather and publish them on the website.

DNA was successful in rapidly expanding the website’s content and giving editors the flexibility to create page content on the fly.

As our usage of DNA grew, we faced many technical issues which ultimately boiled down to three main themes:

  • Poor performance
  • Instability
  • Slow developer velocity (the time it takes to fix, enhance, and deploy code)

Poor performance

Our site suffered from poor performance, with sluggish load times and unreliable availability. This, in turn, directly impacted our ad revenue since a faster page translated into faster ad viewability and more revenue generation.

To address some of our performance concerns, we conducted different front-end experiments.

  • We analyzed and evaluated modules to determine what we could change. For example, we removed modules that were rarely used, and we rewrote others so they wouldn't depend on giant JS libraries.
  • We evaluated our usage of a tag manager in reference to ad serving performance.
  • We lazy-loaded modules, removing them from the first page load to reduce the amount of JavaScript served to the client (sketched below).
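In today's terms, that lazy-loading pattern looks roughly like the sketch below, which keeps a heavy module out of the initial bundle and loads it only when its container scrolls into view (the module and element names are hypothetical):

// Hypothetical sketch: defer a heavy module until it is actually needed
const observer = new IntersectionObserver(async (entries) => {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      observer.unobserve(entry.target);
      // The dynamic import keeps this code out of the first load
      const { renderRadarMap } = await import('./modules/radar-map.js');
      renderRadarMap(entry.target);
    }
  }
});
observer.observe(document.querySelector('#radar'));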

Instability

Because of the fragile deployment process of using Drupal with Angular, our site suffered from too much downtime. Deployment was a matter of taking the name of a git branch and entering it into a UI to release it into different environments. There was no real build process, only version control.

Ultimately, this led to many bad practices that impacted developers, including a lack of version control methodology, non-reproducible builds, and the like.

Slower developer velocity

The majority of our developers had front-end experience, but very few of them were knowledgeable about the inner workings of Drupal and PHP. As such, features and bug fixes related to PHP were not addressed as quickly, due to knowledge gaps.

Large deployments contributed to slower velocity as well as stability issues, where small changes could break the entire site. Since a deployment was the entire codebase (Drupal, Drupal plugins/modules, front-end code, PHP scripts, etc.), small code changes in a release could easily get overlooked and not be properly tested, breaking the deployment.

Overall, while we had a few quick wins with DNA, the constant regressions due to the setup forced us to consider alternative paths for our architecture.

Rethinking our architecture to include Node.js

Our first foray into using Node.js was a one-off project for creating a lite experience for weather.com that was completely server-side rendered and had minimal JavaScript. The audience had limited bandwidth and minimal device capabilities (for example, low-end smartphones using Facebook’s Free Basics).

Stakeholders were happy with the lite experience, commenting on the nearly instantaneous page loads. Analyzing this proof-of-concept was important in determining our next steps in our architectural overhaul.

Differing from DNA, the lite experience:

  • Rendered pages server-side only
  • Kept the front-end footprint under 30KB (virtually no JavaScript, little CSS, few images)

We used what we learned from the lite experience to serve our website more performantly. This started with rethinking our DNA architecture.

Metrics to measure success

Before we worked on a new architecture, we had to show our business that a re-architecture was needed. The first thing we had to determine was what to measure in order to show success.

We consulted with the Google Ad team to understand how exactly a high-performing webpage impacts business results. Google showed us proof that improving page speed increases ad viewability, which translates to revenue.

With that in hand, each day we conducted tests across a set of pages to measure:

  • Speed index
  • Time to first interaction
  • Bytes transferred
  • Time to first ad call

We used a variety of tools to collect our metrics: WebPageTest, Lighthouse, sitespeed.io.
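As one illustration, the Lighthouse CLI can capture several of these numbers in a single run; the flags below are standard Lighthouse options rather than our exact pipeline:

npx lighthouse https://weather.com --only-categories=performance --output=json --output-path=./report.json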

As we compiled a list of these metrics, we were able to judge whether certain experiments were beneficial or not. We used our analysis to determine what needed to change in our architecture to make the site more successful.

While we intended to completely rewrite our DNA website, we acknowledged that we needed to stair-step our approach to experimenting with a newer architecture. Using the above methodology, we created a beta page and A/B tested it to verify its success.

From Shark Tank to a beta of our architecture

Recognizing the performance of our original Node.js proof of concept, we held a “Shark Tank” session where we presented and defended different ideal architectures. We evaluated whole frameworks or combinations of libraries like Angular, React, Redux, Ember, lodash, and more.

From this experiment, we collectively agreed to move from our monolithic architecture to a Node.js backend and a newer React frontend. Our timeline for the migration was between nine months and a year.

Ultimately, we decided to use a pattern of small JS libraries and tools, similar to that of a UNIX operating system’s tool chain of commands. This pattern gives us the flexibility to swap out one component from the whole application instead of having to refactor large amounts of code to include a new feature.

On the backend, we needed to decouple page creation and page serving. We kept Drupal as a CMS and created a way for documents to be published out to more scalable systems which can be read by other services. We followed the pattern of Backends for Frontends (BFF), which allowed us to decouple our page frontends and allow for more autonomy of our backend downstream systems. We use the documents published by the CMS to deliver pages with content (instead of the traditional method of the CMS monolith serving the pages).
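A minimal sketch of that BFF shape using Koa (which we adopted later, as described below); the document store and renderer here are hypothetical stand-ins:

// Hypothetical BFF sketch: serve pages from documents the CMS publishes,
// rather than letting the CMS monolith serve pages directly
const Koa = require('koa');
const Router = require('@koa/router');

// Stand-in for a read against the scalable document store
async function fetchPublishedDocument(locationId) {
  return { title: `Forecast for ${locationId}`, body: '...' };
}

// Stand-in for the server-side page renderer
function renderPage(doc, locale) {
  return `<html lang="${locale}"><body><h1>${doc.title}</h1></body></html>`;
}

const app = new Koa();
const router = new Router();

router.get('/:locale/weather/:locationId', async (ctx) => {
  const doc = await fetchPublishedDocument(ctx.params.locationId);
  ctx.type = 'html';
  ctx.body = renderPage(doc, ctx.params.locale);
});

app.use(router.routes());
app.listen(3000);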

Even though Drupal and PHP can render server-side, our developers were more familiar with JavaScript, so using Node.js to implement isomorphic (universal) rendering of the site increased our development velocity.

Developing with Node.js was an easy shift in focus for our previously front-end-oriented developers. Since the majority of our developers had a primarily JavaScript background, we stayed away from solutions that revolved around separate server-side languages for rendering.

Over time, we implemented and evolved our usage from our first project. After developing our first few pages, we decided to move away from ExpressJS to Koa to use newer JS standards like async/await. We started with pure React but switched to React-like Inferno.js.
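A quick sketch of what async/await buys in Koa: errors thrown anywhere in an async handler propagate to a single try/catch middleware instead of per-route callbacks (getForecast is a hypothetical helper):

// Sketch: Koa middleware with async/await
const Koa = require('koa');
const app = new Koa();

// Hypothetical data helper
async function getForecast(id) {
  if (!id) throw new Error('missing id');
  return { id, tempC: 21 };
}

// One error handler covers every downstream async middleware
app.use(async (ctx, next) => {
  try {
    await next();
  } catch (err) {
    ctx.status = 500;
    ctx.body = { error: err.message };
  }
});

app.use(async (ctx) => {
  ctx.body = await getForecast(ctx.query.id);
});

app.listen(3000);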

After evaluating many different build systems (gulp, grunt, browserify, systemjs, etc.), we decided to use Webpack to facilitate our build process. We saw Webpack's growing maturity in a fast-paced ecosystem, as well as the pitfalls of its competitors (or lack thereof).

Webpack solved our core issue of DNA’s JS aggregation and minification. With a centralized build process, we could build JS code using a standardized module system, take advantage of the npm ecosystem, and minify the bundles (all during the build process and not during runtime).
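A bare-bones sketch of that kind of centralized build; the page entries are hypothetical, and production mode enables minification out of the box in webpack 4+:

// webpack.config.js -- minimal sketch, not our actual configuration
const path = require('path');

module.exports = {
  mode: 'production', // minifies during the build, not at runtime
  entry: {
    home: './src/pages/home.js',
    forecast: './src/pages/forecast.js',
  },
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: '[name].[contenthash].js', // one bundle per page entry
  },
};

Splitting entries per page like this is also what later allowed us to drop the single SPA entry point, as described further below.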

Moving from client-side to server-side rendering of the application increased our speed index and got information to the user faster. React helped us in this aspect of universal rendering: being able to share code between the frontend and backend was crucial for server-side rendering and code reuse.

Our first launch of our beta page was a Single Page App (SPA). Traditionally, we had to render each page and location as a hit back to the origin server. With the SPA, we were able to reduce our hits back to the origin server and improve the speed of rendering the next view thanks to universal rendering.

The following image shows how much faster the webpage response was after the SPA was introduced.

[Image: webpage response times before and after the SPA launch]

As our solution included more Node.js, we were able to take advantage of a lot of the tooling associated with a Node.js ecosystem, including ESLint for linting, Jest for testing, and eventually Yarn for package management.
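For instance, a minimal ESLint setup for a Node.js codebase looks something like this generic sketch (not our actual ruleset):

// .eslintrc.js -- a generic starting point
module.exports = {
  extends: 'eslint:recommended',
  env: {
    node: true,
    es2017: true, // enables ES2017 globals and async/await-era syntax
  },
};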

Linting and testing, as well as a more refined CI/CD pipeline, helped reduce bugs in production. This led to a more mature and stable platform as a whole, higher engineering velocity, and increased developer happiness.

Changing deployment strategies

Recognizing our problems with our DNA deployments, we knew we needed a better solution for delivering code to infrastructure. With our DNA setup, we used a managed system to deploy Drupal. For our new solution, we decided to take advantage of newer, container-based deployment and infrastructure methodologies.

By moving to Docker and Kubernetes, we achieved many best practices:

  • Separating out disparate pages into different services reduces failures
  • Building stateless services allows for less complexity, ease of testing, and scalability
  • Builds are repeatable (Docker images ensure the right artifacts are deployed and consistent)

Our Kubernetes deployment allowed us to be truly distributed across four regions and seven clusters, with dozens of services scaled from 3 to 100+ replicas running on 400+ worker nodes, all on IBM Cloud.

Addressing a familiar set of performance issues

After running a successful beta experiment, we continued down the path of migrating pages into our new architecture. Over time, some familiar issues cropped up:

  • Pages became heavier
  • Build times were slower
  • Developer velocity decreased

We had to evolve our architecture to address these issues.

Beta v2: Creating a more performant page

Our second evolution of the architecture was a renaissance, a rebirth of sorts. We went back to basics and revisited our lite experience to see why it was successful. We analyzed our performance issues and concluded that the SPA had become a performance bottleneck. Although a SPA benefits second-page visits, we came to understand that the majority of our users visit the website and leave once they get their information.

We designed and built the solution without a SPA, but kept React hydration in order to preserve code reuse across the server and client side. We also paid more attention to tooling during development, ensuring that code coverage (the percentage of client JS delivered that was actually used) became more efficient.
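Hydration here means the server renders the HTML and the client attaches React's event handlers to that existing markup instead of re-rendering it from scratch. Roughly, with an illustrative component:

// server.js -- render the app to an HTML string
const React = require('react');
const { renderToString } = require('react-dom/server');

function App() {
  return React.createElement('button', { onClick: () => alert('hi') }, 'Forecast');
}

const html = renderToString(React.createElement(App));
// html is then embedded in the page template inside <div id="root">

// client.js -- attach handlers to the server-rendered markup
// const ReactDOM = require('react-dom');
// ReactDOM.hydrate(React.createElement(App), document.getElementById('root'));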

Removing the SPA was also key to reducing build times. Since a page was no longer stitched together from a single entry point, we split the Webpack builds so that individual pages could have their own set of JS and assets.

We were able to reduce our page weight even more compared to the Beta site. Reducing page weight had an overall impact on page load times. The graph below shows how speed index decreased.

[Graph: speed index over time] Note: Some data was lost between January and October of 2019.

This architecture is now our foundation for any and all pages on weather.com.

Conclusion

weather.com was not transformed overnight and it took a lot of work to get where we are today. Adding Node.js to our ecosystem required some amount of trial and error.

We achieved success by understanding our issues, collecting metrics, and implementing and then reimplementing solutions. Our journey was an evolution. Not only did our back end change; we also had to be smarter on the front end to achieve the best performance. Changing our deployment strategy and infrastructure allowed us to adopt multiple best practices, reduce downtime, and improve overall system stability. Using JavaScript on both the back end and front end improved developer velocity.

As we continue to architect, evolve, and expand our solution, we are always looking for ways to improve. Check out weather.com on your desktop, or for our newer/more performant version, check out our mobile web version on your mobile device.

Introducing OpenJS Foundation Open Office Hours

By Announcement, Blog, Office Hours

This piece was written by Joe Sepi, OpenJS Foundation Cross Project Council Chair

Kai Cataldo from ESLint during a recent Office Hours session.

Earlier this year, to help our community better understand ways to participate, as well as to give hosted projects a way to showcase what they are working on, I started hosting bi-weekly Open Office Hours.

The goal of Office Hours is to give members of our community a place to ask questions, get guidance on onboarding, and learn more about other projects in the Foundation. It has also served as a place for current projects to get connected to the wider OpenJS Foundation community and share key learnings. 

So far, we've had Wes Todd from the Express project, Alexandr Tovmach from Node.js i18n, Saulo Nunes talking through ways to contribute to Node.js, and Kai Cataldo from ESLint. You can find all the previously recorded sessions and the upcoming schedule at github.com/openjs-foundation/open-office-hours

Everyone is invited to attend.

How Can I Join?
These meetings take place every other Thursday at 10 am PT / 12 pm CT / 1 pm ET and are scheduled on the OpenJS Public Calendar, where you can also find the Zoom link to join.

While everyone is encouraged to join the call and the initiative, if you are unable to attend a session but would like to get more involved or have more questions, please open an issue in the repo.

Let’s do more in open source together!