Electron Update: Community Discord Server and Hacktoberfest

By Blog, Electron, Project Update

This blog was originally posted on the Electron blog. Electron is an Impact Project at the OpenJS Foundation.

Community Discord Server and Hacktoberfest

Join us for community bonding and a month-long celebration of open source.

Electron Community Discord Launch

Electron’s Outreach Working Group is excited to announce the launch of our official community Discord server!

Why a new Discord server?

In its early days as the backbone of the Atom text editor, community discussion on the Electron framework occurred in a single channel in Atom’s Slack workspace. As time passed and the two projects were increasingly decoupled, the relevance of the Atom workspace to the Electron project decreased, and maintainer participation in the Slack channel declined in the same manner.

Up until now, we had still been redirecting our broader community to the Atom Slack workspace, even though we’ve had many reports from folks who have had trouble receiving invitations, and few of our core maintainers were frequenting the channel.

We’re setting up this shiny new server to be a central discussion hub for the community where you can get the latest news on all things Electron.

Get in here!

So far, the server’s membership consists of a few maintainers who have been working together to set it up, but we’re so excited to chat with you all! Come ask for help, keep up to date with Electron releases, or just hang out with other developers. We’ve got a handy invite for you that’ll give you access to the server!

Hacktoberfest 2020

As a large and long-running open-source project, Electron wouldn’t have been nearly as successful without all the contributions from its community, from code submissions to bug reports to documentation changes, and much more. That’s why we believe in the importance of participating in Hacktoberfest to usher in a wider community of developers of all skill levels into the project.

Odds and ends

This year, we don’t have a wider project to give you all to work on, but we’d like to focus on opportunities to contribute across the Electron JavaScript ecosystem.

Look out for issues tagged hacktoberfest across our various repositories, including the main electron/electron repository, the electron/electronjs.org website, electron/fiddle, and electron-userland/electron-forge!

P.S. If you’re feeling particularly adventurous, we also have a backlog of issues marked with help wanted tags if you’re looking for more of a challenge.

Stuck? Come chat with us!

It’s no coincidence that the grand opening of our Discord server coincides with the largest celebration of open-source software of the year. Check out the #hacktoberfest channel to ask for help on your Hacktoberfest PR. In case you missed it, here’s the invite link again!

Have feedback on this post? Let @electronjs know on Twitter.

Need help or found a bug? Contact us.

Node-RED Virtual Conference taking place October 10th

By Blog, Event, Node-RED

Node-RED Con Tokyo, a technology conference for all Node-RED users, is taking place on October 10th from 13:00 to 18:15 Japan Standard Time. Node-RED is a hosted project at the OpenJS Foundation. For more information and to register, check out the conference page: https://nodered.jp/noderedcon2020/index-en.html

A recent technology trend is “low-code/no-code” tools, which help developers who have great ideas create their own applications. Node-RED is a great example of a low-code/no-code tool. It is well suited to industrial IoT, Web of Things, smart city projects, education, and prototyping.

This conference features a great lineup of speakers talking about their use cases and technologies using Node-RED. Talks will be in Japanese or English. The conference has been organized by the Node-RED User Group Japan with input from community members around the globe, including the United Kingdom, Brazil, Indonesia, and Japan.

Session Highlights

– “A mechanism to grow OSS Eco-System – Linux Foundation – OpenJS Foundation – Node-RED”, Noriaki Fukuyasu, Linux Foundation Japan

– “Looking to the future of Node-RED”, Nick O’Leary, IBM

– “DevOps with Node-RED. How to quickly turn an idea into a service.”, Masanori Usami, Uhuru

– “Node-RED suitable for education in the no-code era”, Wataru Yamazaki, Uhuru

– “Utilizing IoT interoperability and Node-RED based on the Web of Things standard”, Kunihiko Toumura, Hitachi, Ltd.

– And 15 more great sessions!

More than 200 attendees have registered for the conference. Anyone can join the free event virtually via the web. We look forward to seeing you there, virtually!

Register today!

OpenJS In Action: Esri powering COVID-19 response with open source

By Blog, Case Study, Dojo, ESLint, Grunt, OpenJS In Action

The OpenJS In Action series features companies that use OpenJS Foundation projects to develop efficient, effective web technologies. 

Esri, a geographic information systems company, is using predictive models and interactive maps with JavaScript technologies to help the world better understand and respond to the recent COVID-19 pandemic. Recently, they have built tools that visualize how social distancing precautions can help reduce cases and the burden on healthcare systems. They have also helped institutions like Johns Hopkins create their own informational maps by providing a template app and resources to extend functionality. 

Esri uses OpenJS Foundation projects such as Dojo Toolkit, Grunt, ESLint and Intern to increase developer productivity and deliver high-quality applications that help the world fight back against the pandemic. 

Esri’s contributions to the COVID-19 response effort and an explanation of how they created the underlying technologies are available in this video:

https://youtu.be/KLnht-1F3Ao

Robin Ginn, Executive Director of the OpenJS Foundation, spoke with Kristian Ekenes, Product Engineer at Esri, to highlight the work his company has been doing. Esri normally creates mapping software, databases and tools to help businesses manage spatial data. However, Ekenes started work on a tool called Capacity Analysis when the COVID-19 pandemic began to spread. 

Capacity Analysis is a configurable app that allows organizations to display and interact with results from two scenarios predicting a hospital’s ability to meet the demand of COVID-19 patients given configurable parameters, such as the percentage of people following social distancing guidelines. Health experts can create two hypothetical scenarios using one of two models: Penn Medicine’s COVID-19 Hospital Impact Model for Epidemics (CHIME) or the CDC’s COVID-19Surge model. Then they can deploy their own version of Capacity Analysis to view how demand for hospital beds, ICU beds, and ventilators varies by time and geography in each scenario. This tool is used by governments worldwide to better predict how the pandemic will challenge specific areas.

During the interview, Ekenes spoke about the challenges that come with taking on ambitious projects like Capacity Analysis. Esri has both a large developer team and a diverse ecosystem of applications. This makes it difficult to maintain consistency in the API and SDKs deployed across desktop and mobile platforms. To overcome these challenges, Esri utilizes several OpenJS Foundation projects, including Dojo Toolkit, Grunt, ESLint and Intern.

Ekenes explained that Grunt and ESLint increase developer productivity by providing real-time feedback when writing code. The linter also standardizes work across developers by indicating when incorrect practices are being used. This reduces the number of pull requests between collaborators and saves time for the entire team. Intern allows developers to write testing modules and create high-quality apps by catching bugs early. In short, Esri helps ensure consistent and thoroughly tested applications by incorporating OpenJS Foundation projects into their work. 
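
As a rough illustration of that kind of setup (a sketch only, not Esri’s actual configuration; it assumes the grunt-eslint plugin and a hypothetical src/ layout), linting can be wired into a Grunt build like so:

// Gruntfile.js — a minimal sketch wiring ESLint into a Grunt build
module.exports = function (grunt) {
  grunt.loadNpmTasks('grunt-eslint'); // assumes grunt-eslint is installed
  grunt.initConfig({
    eslint: {
      target: ['src/**/*.js'] // lint all JavaScript under src/ (hypothetical layout)
    }
  });
  // Running `grunt` now fails the build on lint errors, giving fast feedback.
  grunt.registerTask('default', ['eslint']);
};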

From streaming to studio: The evolution of Node.js at Netflix

By Blog, Case Study, Node.js, Project Update

As platforms grow, so do their needs. However, the core infrastructure is often not designed to handle these new challenges as it was optimized for a relatively simple task. Netflix, a member of the OpenJS Foundation, had to overcome this challenge as it evolved from a massive web streaming service to a content production platform. Guilherme Hermeto, Senior Platform Engineer at Netflix, spearheaded efforts to restructure the Netflix Node.js infrastructure to handle new functions while preserving the stability of the application. In his talk below, he walks through his work and provides resources and tips for developers encountering similar problems.

Check out the full presentation 

Netflix initially used Node.js to enable high-volume web streaming to over 182 million subscribers. Their three goals with this early infrastructure were to provide observability (metrics), debuggability (diagnostic tools) and availability (service registration). The result was the NodeQuark infrastructure. An application gateway authenticates and routes requests to the NodeQuark service, which then communicates with APIs and formats responses that are sent back to the client. With NodeQuark, Netflix also created a managed experience — teams could create custom API experiences for specific devices. This allows the Netflix app to run seamlessly on different devices. 

Beyond streaming

However, Netflix wanted to move beyond web streaming and into content production. This posed several challenges to the NodeQuark infrastructure and the development team. Web streaming requires relatively few applications, but serves a huge user base. On the other hand, a content production platform houses a large number of applications that serve a limited user base. Furthermore, a content production app must have multiple levels of security for employees, partners and users. An additional issue is that development for content production is ideally fast-paced, while platform releases are slow, iterative processes intended to ensure application stability. Grouping these two processes together seems difficult, but the alternative is to spend unnecessary time and effort building a completely separate infrastructure. 

Hermeto decided that in order to solve Netflix’s problems, he would need to use self-contained modules. In other words, plugins! By transitioning to plugins, the Netflix team was able to separate the infrastructure’s functions while still retaining the ability to reuse code shared between web streaming and content production. Hermeto then took plugin architecture to the next step by creating application profiles. The application profile is simply a list of plugins required by an application. The profile reads in these specific plugins and then exports a loaded array. Therefore, the risk of a plugin built for content production breaking the streaming application was reduced. Additionally, by sectioning code out into smaller pieces, the Netflix team was able to remove moving parts from the core system, improving stability. 
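
To make the idea concrete, here is a minimal sketch of an application profile (illustrative only, not Netflix’s actual NodeQuark code; the plugin names and path are hypothetical):

// A profile is simply a list of plugin names the platform should load.
const streamingProfile = ['metrics', 'logging', 'service-registration'];

function loadProfile(profile) {
  // Each entry names a self-contained plugin module. Loading only what the
  // profile lists keeps content-production plugins out of the streaming app.
  return profile.map((name) => require(`./plugins/${name}`)); // hypothetical path
}

// The profile reads in its plugins and exports a loaded array.
module.exports = loadProfile(streamingProfile);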

Looking ahead

In the future, Hermeto wants to allow teams to create specific application profiles that they can give to customers. Additionally, Netflix may be able to switch from application versions to application profiles as the code breaks into smaller and smaller pieces. 

To finish his talk, Hermeto gave his personal recommendations for open source projects that are useful for observability and debuggability. Essentially, a starting point for building out your own production-level application!  

Personal recommendations for open source projects 

  • Metrics and alerts
  • Centralized logging
  • Distributed tracing
  • Diagnostics
  • Exception management

Node.js Package Maintenance: Bridging the gap between maintainers and consumers

By Blog, Node.js, Project Update

This blog was written by Michael Dawson with input from the Node.js package Maintenance Working Group. It was originally posted on the Node.js blog. Node.js is an OpenJS Foundation Impact Project.

A while back I talked about the formation of the Node.js package maintenance Working Group and some of the initial steps that we had in mind in terms of helping to move the ecosystem forward. You can read up on that here if you’d like:
https://medium.com/@nodejs/call-to-action-accelerating-node-js-growth-e4862bee2919.

This blog is a call to action for package maintainers: we need your help to move one of our initiatives forward.

It’s been almost two years, and we’ve been working on a number of initiatives, which you can learn more about through the issues in the package-maintenance repo. Things never move quite as fast as we’d like, but we are making progress in a number of different areas.

One area that we identified was:

Building and documenting guidance, tools and processes that businesses can use to identify packages on which they depend, and then to use this information to be able to build a business case that supports their organization and developers helping to maintain those packages.

We started by looking at how to close the gap between maintainers and consumers in terms of expectations. Mismatched expectations can often be a source of friction and by providing a way to communicate the level of support behind a package we believe we can:

  • help maintainers communicate the level of support they can/want to provide, which versions of Node.js they plan to support going forward, and the current level of backing in place to help keep development of the package moving forward.
  • reduce potential friction due to mismatched expectations between module maintainers and consumers
  • help consumers better understand the packages they depend on so that they can improve their planning and manage risk.

In terms of managing risk we hope that by helping consumers better understand the key packages they depend on, it will encourage them to support these packages in one or more ways:

  • encouraging their employees to help with the ongoing maintenance
  • providing funding to the existing maintainers
  • supporting the Foundation the packages are part of (for example, the OpenJS Foundation)

After discussion at one of the Node.js Collaborator Summits where there was good support for the concept, the team has worked to define some additional metadata in the package.json in order to allow maintainers to communicate this information.

The detailed specification for this data can be found in: https://github.com/nodejs/package-maintenance/blob/master/docs/PACKAGE-SUPPORT.md.

The TL;DR version is that it allows the maintainer to communicate:

  • target: the platform versions that the package maintainer aims to support. This is different from the existing engines field in that it expresses a higher-level intent, like “current LTS version,” for which the specific versions can change over time.
  • response: how quickly the maintainer chooses to, or is able to, respond to issues and contacts for that level of support
  • backing: how the project is supported, and how consumers can help support the project.
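
As a purely illustrative sketch of how those three fields might appear in a package.json (the PACKAGE-SUPPORT.md specification linked above is the authoritative source for field names and allowed values):

{
  "name": "example-package",
  "version": "1.0.0",
  "support": {
    "versions": [
      {
        "version": "*",
        "target": { "node": "lts" },
        "response": { "type": "best-effort" },
        "backing": { "hobby": true }
      }
    ]
  }
}

The values shown here are placeholders; check each field against the specification before adding it to a real package.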

We completed the specification a while ago, but before asking maintainers to start adding the support information, we wanted to provide some tooling to help validate that the information added was complete and valid. We’ve just finished the first version of that tool, which is called support.

The tool currently offers two commands:

  • show
  • validate

The show command displays a simple tree of the packages for an application and the raw support information for those packages. Much more sophisticated commands to help consumers review and understand the support info will make sense later, but at this point it’s more important to start getting the information filled in, as that is needed before more sophisticated analysis makes sense.

The validate command helps maintainers confirm that they’ve added/defined the support information correctly. If there are errors or omissions, it will let the maintainer know, so that support information is high quality and complete as it is added.
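
Assuming the tool is installed and exposed as a support command (the package-maintenance repo documents the actual package name and installation method), usage would look roughly like:

support show      # print the application's package tree with each package's raw support info
support validate  # check that the support info you added is valid and complete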

Our call to action is for package maintainers to:

  • Review the support specification and give us your feedback if you have suggestions/comments.
  • Add support info to your package
  • Use the support tool in order to validate the support info and give us feedback on the tool
  • Let us know when you’ve added support info so that we can keep track of how well we are doing in terms of the ecosystem supporting the initiative, as well as knowing which real-world packages we can use/reference when building additional functionality into the support tool.

We hope to see the ecosystem start to provide this information and look forward to seeing what tooling people (including the package-maintenance working group and others) come up with to help achieve the goals outlined.

Expedia Group: Building better testing pipelines with open source

By Blog, Case Study, ESLint, OpenJS In Action

The OpenJS In Action series features companies that use OpenJS Foundation projects to help develop efficient, effective web technologies. 

Software developers at global travel company Expedia Group are using JavaScript, ESLint and robust testing pipelines to reduce inconsistency and duplication in their code. Switching from Java and JSP to Node.js has streamlined development and design systems. Beyond that, Expedia developers are looking into creating a library of reusable design and data components for use across their many brands and pages. 

Expedia is an example of how adoption of new technologies and techniques can improve customer and developer experiences. 

A video featuring Expedia is available here: https://youtu.be/FDF6SgtEvYY

Robin Ginn, executive director of the OpenJS Foundation, interviewed Tiffany Le-Nguyen, Software Development Engineer at Expedia Group. Le-Nguyen explained how accessibility and performance concerns led developers to modernize Expedia’s infrastructure. One of the choices they made was to integrate ESLint into their testing pipeline to catch bugs and format input before content was pushed live. ESLint also proved to be a huge time-saver — it enforced development standards and warned developers when incorrect practices were being used. 

ESLint was especially useful for guiding new developers through JavaScript, Node.js and TypeScript. Expedia made the bold move to switch most of their applications from Java and JSP to Node.js and TypeScript. Le-Nguyen is now able to catch most errors and quickly push out new features by combining Node.js with Express and a robust testing pipeline. 

However, Expedia is used globally to book properties and dates for trips. Users reserve properties with different currencies across different time zones. This makes it difficult to track when a property was reserved and whether the correct amount was paid. Luckily, Expedia was able to utilize Globalize, an OpenJS project that provides number formatting and parsing, date and time formatting and currency formatting for languages across the world. Le-Nguyen was able to simplify currency tracking across continents by integrating Globalize into the project. 
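
As a minimal sketch of what that integration can look like (not Expedia’s actual code; the cldr-data package shown here is one common way to supply the CLDR data Globalize requires):

const Globalize = require('globalize');
const cldrData = require('cldr-data');

// Globalize needs CLDR data loaded before it can format anything.
Globalize.load(cldrData.entireSupplemental());
Globalize.load(cldrData.entireMainFor('en', 'de'));

Globalize('en').formatCurrency(1299.99, 'USD'); // "$1,299.99"
Globalize('de').formatCurrency(1299.99, 'EUR'); // "1.299,99 €"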

To end the talk, Le-Nguyen suggested that web developers should take another look into UI testing. Modern testing tools have simplified the previously clunky process. Proper implementation of a good testing pipeline improves the developer experience and leads to a better end product for the user. 

messageformat is Working Hard to Make Itself Obsolete

By Blog, In The News, messageformat, Project Update

This post originally appeared on DZone on September 1, 2020.

messageformat is an OpenJS Foundation project that handles both pluralization and gender in applications. It helps keep messages in human-friendly formats, and can be the basis for tone and accuracy that are critical for applications. Pluralization and gender are not a simple challenge, and deciding on which message format to implement can be pushed down the priority list as development teams make decisions on resources. However, this can lead to tougher transitions later on in the process with both technology and vendor lock-in playing a role. 

Quick note: The upstream spec is called ICU MessageFormat. ICU stands for International Components for Unicode: a set of portable libraries that are meant to make working with i18n easier for Java and C/C++ developers. If you’ve worked on a project with i18n/l10n, you may have used the ICU MessageFormat without knowing it. 

To find out more about messageformat, I spoke with Eemeli Aro, Software Developer at Vincit, and OpenJS Cross Project Council (CPC) member. Aro maintains the messageformat libraries, and actively participates in various efforts to improve JavaScript localization. Aro spoke on “The State of the Art in Localization” at last year’s Node+JS Interactive. Aro is an active participant in ECMA-402 processes, runs the monthly HelsinkiJS meetups, and helps organise React Finland conferences. 

How do formats deal with nuances in language? 

It’s all about choices. Variance, e.g. how the greeting used by a program could vary from one instance to the next, gets dealt with by having the messaging format that you’re using support the ability to have choices. So you can have some random number coming in and depending on the choice of that random number, you select one of a number of choices. This functionality isn’t directly built into ICU MessageFormat, but it’s very easily implementable in a way that gets you results. 
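
As a hypothetical sketch of that pattern using the messageformat library, the message defines the named choices while application code supplies the (random) selector:

const MessageFormat = require('messageformat');

const mf = new MessageFormat('en');
const greet = mf.compile(
  '{TONE, select, formal{Good day.} cheerful{Hi there!} other{Hello!}}'
);

// The randomness lives outside the message; the format only knows choices.
const tones = ['formal', 'cheerful', 'other'];
console.log(greet({ TONE: tones[Math.floor(Math.random() * tones.length)] }));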

We need to decide how we deal with choices and whether you can have just a set number of different choice types. Is it a sort of generic function that you can define and then use? It’s an interesting question, but ICU MessageFormat doesn’t yet provide an easy, clear answer to that. But it provides a way of getting what you want. 

What are the biggest problems with messaging formats?

Perhaps the biggest problem is that while ICU MessageFormat is the closest we have to a standard, that doesn’t mean it is in standard use by everyone. There are a number of different other standards. There are various versions that are used by a number of tools and workflows and other processes in terms of localization. The biggest challenge is that when you have some kind of interface and you want to present some messages in that interface, there isn’t one clear solution that’s always the right one for you. 

And then it also becomes challenging because, for the most part, almost any solution that you end up with will solve most of the problems that you have. This is the scope in which it’s easy to get lock-in. Effectively, if you have a workflow that works with one standard or one set of tools or one format that you’re using, then you have some sort of limitation. Eventually, at some point, you will want to do something that your systems aren’t supporting. You can feel like it’s a big cost to change that system, and therefore you make do with what you have, and then you get a suboptimal workflow and a suboptimal result. Eventually, your interface and whole project may not work as well. 

It’s easy to look at messageformat and go, “That’s too complicated for us, let’s pick something simpler.” You end up being stuck with “that something simpler” for the duration of whatever it is that you’re working on. 

You’re forced to make a decision between two bad options. So the biggest challenge is it would be nice to have everyone agree that “this is the right thing to do” and do it from the start! (laughs) 

But of course that is never going to happen. When you start building an interface like that, you start with just having a JSON file with keys and messages. That will work for a long time, for a really great variety of interfaces, but it starts breaking at some point, and then you start fixing it, and then your fix has become your own custom bespoke localization system. 

Is technology lock-in a bigger problem than vendor lock-in? 

Technology lock-in is the largest challenge. Of course there is vendor lock-in as well, because there are plenty of companies offering their solutions and tools and systems for making all of this work and once you’ve started using them, you’re committed. Many of them use different standards than messageformat, their own custom ones. 

In the Unicode working group where I’m active, we are essentially talking about messageformat 2. How do we take the existing ICU MessageFormat specification and improve upon it? How do we make it easier to use? How do we make sure there’s better tooling around it? What sorts of features do we want to add or even remove from the language as we’re doing this work? 

messageformat, the library that I maintain and an OpenJS project, is a JavaScript implementation of ICU MessageFormat. It tries to follow the specification as closely as it can. 

Does using TypeScript help or hurt with localization? 

For the most part, it works pretty well. TypeScript brings in an interesting question of “How do you type these messages that you’re getting out of whatever system you’re using?” TypeScript itself doesn’t provide for plugins at the parser level, so you can’t define that. When there’s input in JavaScript, for example, for a specific file, then you can use specific tools for the different types that are coming out of it. Because messages aren’t usually one by one by one. You have messages in collections, so if you get one message out of a collection in JavaScript, you can make very safe assumptions about what the shape of that message is going to be. 

But of course in TypeScript you need to be much more clear about what the shape of that message is. And, if for whatever reason not everything is a string, then it gets complicated. 

It’s entirely manageable. You can use JavaScript tools for localization in a TypeScript environment; there are just these edge cases that could have better solutions than we currently have, but work on those kinds of cases requires some work on TypeScript’s behalf as well.

Should open source projects build their own solution for localization? 

I think this is one of those cases where it’s good to realize that this is JavaScript. If there’s a problem you can express briefly, you go look and you’ll find five competing solutions that are all valid in one way or another. Whatever your problem or issue is, it is highly likely that you will find someone else has already solved your problem for you, you just need to figure out how to adapt their solution to your exact problem. 

There are a number – like three or four – whole stacks of tooling for various environments for localization. And these are the sorts of things that you should be looking at, rather than writing your own. 

How is the OpenJS Foundation helping with localization?

Well, along with messageformat, OpenJS hosts Globalize, which utilizes the official Unicode CLDR JSON data. 

The greatest benefit that I or the messageformat project is currently getting from the OpenJS Foundation is that the Standards Working Group is quite active. And with their support, I’m actively participating in the Unicode Consortium working group I mentioned earlier where we are effectively developing the next version of the specification for messageformat. 

How far off is the next specification for messageformat?

It’s definitely a work in progress. We have regular monthly video calls and are making good progress otherwise. I would guess that we might get something in actual code maybe next year. But it may be actually longer than that for the messageformat to become standard and ready. 

How will localization be handled differently in 3-5 years? 

The messageformat working group didn’t start out under Unicode; it started out under ECMA-402. That whole work started from looking at what we should do about adding support for messageformat to JavaScript. And this is one of the main expected benefits to come out of the Unicode messageformat working group. In the scope of 3-5 years, it is reasonable to assume that we are going to have something like Intl.MessageFormat as a core component in JavaScript, which will be great! 

Effectively, this is also coming back to what the OpenJS Foundation is supporting. What I’m primarily trying to push with messageformat is to make the whole project obsolete! Right now we’re working on messageformat 3, which is a refactoring with some breaking changes. But hopefully a later version will be a polyfill for the actual Intl.MessageFormat functionality that will come out at some point. 

On a larger scale, it’s hard to predict how much non-textual interfaces are going to become a more active part of our lives. When you’re developing an application that uses an interface that isn’t textual, what considerations do you really need to bring in and how do you really design everything to work around that? When we’re talking about Google Assistant, Siri, Amazon Echo, their primary interface is really language, messages. Those need to be supported by some sort of backing structure. So can that be messageformat? 

Some of the people working on these systems are actively participating in the messageformat 2 specifications work. And through that, we are definitely keeping that within the scope of what we’re hoping to do. 

Try it out now

To install the core messageformat package, use:

npm install --save-dev messageformat@next

This includes the MessageFormat compiler and a runtime accessor class that provides a slightly nicer API for working with larger numbers of messages. More information: messageformat.github.io/messageformat/v3
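
Once installed, basic usage looks roughly like this (a minimal sketch of the v3 API documented at the link above):

const MessageFormat = require('messageformat');

const mf = new MessageFormat('en');
const msg = mf.compile('You have {N, plural, one{# message} other{# messages}}.');

console.log(msg({ N: 1 })); // "You have 1 message."
console.log(msg({ N: 5 })); // "You have 5 messages."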

AMP Advisory Committee 2020 election

By AMP, Blog

The following blog was originally posted on AMP.dev. AMP is a Growth Project at the OpenJS Foundation. It was posted by Tobie Langel and Jory Burson on behalf of the AMP AC. 

AMP’s development is stewarded by an open governance system of multiple working groups and two committees: the Technical Steering Committee (TSC), which is responsible for the project’s direction and day-to-day operations, and the Advisory Committee (AC), which provides valuable perspective, stakeholder input, and advice to the TSC and to working groups.

An important goal of this governance structure is to ensure that the voices of those who do not contribute code to AMP, but are nonetheless impacted by it, get heard. This responsibility is shared across all contributors and governing bodies but falls particularly on the shoulders of the AC, which has a duty to ensure its membership is as representative and diverse as possible.

The AC is opening its call for nominations. We hope you will consider joining it. The AC is looking for candidates who will bring new perspectives and insight from AMP’s various constituencies: the industries adopting AMP to deliver content to their users, such as the publishing industry and e-commerce; the tech vendors whose solutions help power these industries, such as content delivery networks, web agencies, tooling providers, or payment providers; the industry experts focusing on topics such as accessibility, internationalization, performance, or standardization; and end-users and their representatives, such as consumer advocates. This list is by no means exhaustive. All candidates are welcomed and encouraged to apply, but preference will be given to those who bring an underrepresented perspective to the AC and help broaden its horizon.

For a better understanding of the AC’s work and membership responsibilities, read the member expectations document, working mode document, and minutes of past meetings. Also have a look at its GitHub repository and in particular its project board.

The AC is fully distributed—it spans over nine time zones—working asynchronously over email and GitHub issues (although its work isn’t technical in nature). It meets every other week over video conference, and ordinarily every six months face to face (note that there are no face-to-face meetings planned for the remainder of 2020 or the first half of 2021 at this time).

If you have questions, feel free to reach out to Tobie Langel or Jory Burson, AC facilitators.

If you’re interested in joining the AC, please apply.

We will be collecting new applications until Friday, October 2 at 23:59 AoE (Anywhere on Earth). The AC will elect its new members through a consensus based process and will announce the results on Monday, October 20.

How Node.js saved the U.S. Government $100K

By Blog, Case Study, Node.js, OpenJS World

The following blog is based on a talk given at the OpenJS Foundation’s annual OpenJS World event and covers solutions created with Node.js.

When someone proposes a complicated, expensive solution, ask yourself: can it be done cheaper, better and/or faster? Last year, an external vendor wanted to charge $103,000 to create an interactive form and store the responses. Ryan Hillard, Systems Developer at the U.S. Small Business Administration, was brought in to create a less expensive, low-maintenance alternative to the vendor’s proposal. Hillard was able to create a solution using ~320 lines of code and $3,000. In the talk below, Hillard describes what the difficulties were and how his Node.js solution fixed the problem.

Last year, Hillard started work on a government case management system that received and processed feedback from external and internal users. Unfortunately, a recent upgrade and rigorous security measures prevented external users from leaving feedback. Hillard needed to create a secure interactive form and then store the data. However, the solution also needed to be cheap, easy to maintain and stable. 

Hillard decided to use three common services: Amazon Simple Storage Service (S3), Amazon Web Services (AWS) Lambda and Node.js. Together, these pieces provided a simple and versatile way to capture and then store response data. Maintenance is low because the servers are maintained by Amazon. Additionally, future developers can easily alter and improve the process as all three services/languages are commonly used. 
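
As a rough sketch of that pattern (illustrative only, not Hillard’s actual code; the bucket name and event shape are hypothetical), a Lambda handler that stores a form submission in S3 can be very small:

const AWS = require('aws-sdk'); // aws-sdk v2, preinstalled in the Lambda runtime
const s3 = new AWS.S3();

exports.handler = async (event) => {
  const submission = JSON.parse(event.body); // form fields from the request
  await s3.putObject({
    Bucket: 'feedback-responses',        // hypothetical bucket name
    Key: `responses/${Date.now()}.json`, // one object per submission
    Body: JSON.stringify(submission),
    ContentType: 'application/json'
  }).promise();
  return { statusCode: 200, body: JSON.stringify({ ok: true }) };
};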

To end his talk, Hillard discussed the design and workflow processes that led him to his solution. He compares JavaScript to a giant toolkit with hundreds of libraries and dependencies — a tool for every purpose. However, this variety can be counterproductive as the complexity – and thus the management time – increases.

Developers should ask themselves how they can solve their problems without introducing anything new. In other words, size does matter — the smallest, simplest toolkit is the best!

How ESLint Helps Developers to Write Better Code

By AMA, Blog, ESLint

In the OpenJS Foundation “Ask Me Anything” (AMA) series, we get to hear from many inspiring leaders in the JavaScript community. We will highlight the key questions answered by the panel members and provide resources to help developers save time and improve their code. This month, we feature ESLint.

In the middle of a project it can be difficult to identify redundant or problematic sections of code. Frustrations compound when multiple developers, all using different styles, need to collaborate and write using a unified format. However, the aforementioned problems can be avoided with gentle reminders and automated feedback.

This is where linters come in. Linters are essentially spell checkers for code. They highlight syntax errors, refactoring opportunities and style/formatting issues as code is written. This helps developers to notice and fix small errors before they become a major problem. ESLint is an especially powerful tool that identifies problematic sections of code based on the user’s plugins, rules and configurations. Due to its flexibility, it has become an incredibly popular addition to many projects. Kai Cataldo, a maintainer of ESLint, and Brandon Mills, a member of the Technical Steering Committee, answered questions and explained how to get started with the OpenJS project in a recent AMA. 

In the talk, Cataldo clarified that ESLint is designed to help developers write the code they want. It should not tell them whether their code is “right” or “wrong” unless the error is code-breaking. The default settings of ESLint help developers to recognize syntax errors early on, but the tool can be made more powerful with the addition of user defined rules or downloadable configurations. Additionally, teams can standardize code across a team by defining a set of rules for the linter. Therefore, developers save time by writing consistent code that can be easily understood by other members. 
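
For example, a team might commit a shared configuration like this .eslintrc.json (an illustrative sketch; the rule choices here are arbitrary):

{
  "extends": "eslint:recommended",
  "env": { "browser": true, "es2020": true },
  "rules": {
    "semi": ["error", "always"],
    "eqeqeq": "error",
    "no-unused-vars": "warn"
  }
}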

Cataldo and Mills also revealed future plans for ESLint — updated documentation, simplified configuration and parallel linting. They also discussed common problems of linters and how developers can contribute to the project to make ESLint even more powerful. 

Full AMA Replay

https://youtu.be/9BnJWfyZre4

You can find the full AMA broken up by section below: 

0:57 Member Introductions 

3:19 Background of ESLint

5:55 What is a linter?

11:42 Why ESLint is moving away from correcting styling “errors” 

13:32 Future plans for ESLint

21:57 Will you add HTML linting? 

26:15 Exciting features in newest release

27:35 What is the most controversial linting rule? 

29:39 How to get started

32:25 Why doesn’t ESLint have default configurations? 

36:44 How to contribute

For those interested in becoming involved in the projects, please check out the following resources: