Klint Finley, Author at The GitHub Blog
https://github.blog/author/klintron/
Updates, ideas, and inspiration from GitHub to help developers build and design software.

Open source AI is already finding its way into production
https://github.blog/ai-and-ml/generative-ai/open-source-ai-is-already-finding-its-way-into-production/
Tue, 28 Jan 2025 17:00:38 +0000

Open source AI models are in widespread use, enabling developers around the world to build custom AI solutions and host them where they choose.

The post Open source AI is already finding its way into production appeared first on The GitHub Blog.

Open source has long driven innovation and the adoption of cutting-edge technologies, from web interfaces to cloud-native computing. The same is true in the burgeoning field of open source artificial intelligence (AI). Open source AI models, and the tooling to build and use them, are multiplying, enabling developers around the world to build custom AI solutions and host them where they choose.

It’s happening faster than you might realize. In our survey of 2,000 enterprise respondents on software development teams across the US, Germany, India, and Brazil, nearly everyone said they had experimented with open source AI models at some point. The survey didn’t specifically ask about generative AI models and large language models (LLMs), so these results could include other types of AI and machine learning models. Notably, we conducted our survey before the Open Source Initiative published its definition of open source AI.

But the survey results suggest that the use of open source AI models is already surprisingly widespread—and this is expected to grow as more models proliferate and more use cases emerge. Let’s take a look at the rise of open source AI, from the growing popularity of smaller models to emerging use cases in generative AI.

In this article we will:

  • Explore how and why companies are using open source AI models in production today.
  • Learn how open source is changing the way developers use AI.
  • Look ahead at how small, open source models might be used in the future.

Why use smaller, more open models?

Open, or at least less-proprietary, models like the DeepSeek models, Meta’s Llama models, or those from Mistral AI can generally be downloaded and run on your own devices and, depending on the license, you can study and change how they work. Many are trained on smaller, more focused data sets. These models are sometimes referred to as small language models (SLMs), and they’re beginning to rival the performance of LLMs in some scenarios.

There are a number of benefits of working with these smaller models, explains Head of GitHub Next, Idan Gazit. They cost less to run and can be run in more places, including end-user devices. But perhaps most importantly, they’re easier to customize.

While LLMs excel as general-purpose chatbots that need to respond to a wide variety of questions, organizations tend to turn to smaller AI models when they need niche solutions, explains Hamel Husain, an AI consultant and former GitHub employee. For instance, with an open source LLM you can define a grammar and require that the model only output valid tokens according to that grammar.
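To make the grammar idea concrete, here is a toy sketch of constrained decoding. The vocabulary, scores, and grammar below are invented for illustration; real constrained-decoding tools apply the same masking to a model’s actual token probabilities at every step.

```python
import re

# Hypothetical token vocabulary and a tiny grammar: output must match {"n": <digits>}
VOCAB = ['{"n": ', "1", "2", "3", "}", "hello", " "]
FULL = re.compile(r'\{"n": [0-9]+\}')

def is_valid_prefix(s: str) -> bool:
    """Return True if s could still grow into a string matching FULL."""
    head = '{"n": '
    if len(s) < len(head):
        return head.startswith(s)
    if not s.startswith(head):
        return False
    body = s[len(head):]
    if body.endswith("}"):
        return body[:-1].isdigit()
    return body == "" or body.isdigit()

def constrained_decode(scores: dict, max_steps: int = 6) -> str:
    """Greedily pick the highest-scoring token that keeps the output valid."""
    out = ""
    for _ in range(max_steps):
        if FULL.fullmatch(out):
            break  # grammar satisfied, stop generating
        allowed = [t for t in VOCAB if is_valid_prefix(out + t)]
        if not allowed:
            break
        out += max(allowed, key=lambda t: scores.get(t, 0.0))
    return out

# "hello" has the highest score, but the grammar mask never lets it through
scores = {"hello": 0.9, '{"n": ': 0.1, "1": 0.2, "2": 0.3, "3": 0.5, "}": 0.8, " ": 0.4}
print(constrained_decode(scores))  # -> {"n": 3}
```

The model’s preferences still matter (it picks "3" over "1" and "2"), but only within what the grammar allows, which is why constrained outputs stay syntactically valid.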

“Open models aren’t always better, but the more narrow your task, the more open models will shine because you can fine-tune that model and really differentiate it,” says Husain.

For example, an observability platform company hired Husain to help build a solution that could translate natural language into the company’s custom query language, making it easier for customers to craft queries without having to learn its ins and outs.

This was a narrow use case—they only needed to generate their own query language and no others, and they needed to ensure it produced valid syntax. “Their query language is not as prevalent as, let’s say, Python, so the model hadn’t seen many examples,” Husain says. “That made fine-tuning more helpful than it would have been with a less esoteric topic.” The company also wanted to maintain control over all data handled by the LLM without having to work with a third party.

Husain ended up building a custom solution using the then-latest version of Mistral AI’s widely used open models. “I typically use popular models because they’ve generally been fine-tuned already and there’s usually a paved path towards implementing them,” he says.

Open source brings structure to the world of LLMs

One place you can see the rapid adoption of open source models is in tools designed to work with them. For example, Outlines is an increasingly popular tool for building custom LLM applications with both open source and proprietary models. It helps developers define structures for LLM outputs. You can use it, for example, to ensure an LLM outputs responses in JSON format. It was created in large part because of the need for finely tuned, task-specific AI applications.

At a previous job, Outlines co-creator and maintainer Rémi Louf needed to extract some information from a large collection of documents and export it in JSON format. He and his colleague Brandon Willard tried using general purpose LLMs like ChatGPT for the task, but they had trouble producing well-structured JSON outputs. Louf and Willard both had a background in compilers and interpreters, and noticed a similarity between building compilers and structuring the output of LLMs. They built Outlines to solve their own problems.

They posted the project to Hacker News and it took off quickly. “It turns out that a lot of other people were frustrated with not being able to use LLMs to output to a particular structure reliably,” Louf says. The team kept working on it, expanding its features and founding a startup. It now has more than 100 contributors and helped inspire OpenAI’s structured outputs feature.

“I can’t give names, but some very large companies are using Outlines in production,” Louf says.

What’s next

There are, of course, downsides to building custom solutions with open source models. One of the biggest is the need to invest time and resources into prompt construction. And, depending on your application, you may need to stand up and manage the underlying infrastructure as well. All of that requires more engineering resources than using an API.

“Sometimes organizations want more control over their infrastructure,” Husain says. “They want predictable costs and latency and are willing to make decisions about those tradeoffs themselves.”

While open source AI models might not be a good fit for every problem, it’s still the early days. As small models continue to improve, new possibilities emerge, from running models on local hardware to embedding custom LLMs within existing applications.

Fine-tuned small models can already outperform larger models for certain tasks. Gazit expects developers will combine different small, customized models together and use them to complete different tasks. For example, an application might route a prompt with a question about the best way to implement a database to one model, while routing a prompt for code completion to another. “The strengths of many Davids might be mightier than one Goliath,” he says.

In the meantime, large, proprietary models will also keep improving, and you can expect both large and small model development to feed off of each other. “In the near term, there will be another open source revolution,” Louf says. “Innovation often comes from people who are resource constrained.”

Ready to experiment with open source AI? GitHub Models offers a playground for both open source and proprietary models. You can use the public preview to prototype AI applications, conduct side-by-side comparisons, and more.

Get started with GitHub Models for free. Just pick a model and click >_ Playground to begin.

How we evaluate AI models and LLMs for GitHub Copilot
https://github.blog/ai-and-ml/generative-ai/how-we-evaluate-models-for-github-copilot/
Fri, 17 Jan 2025 18:00:03 +0000

We share some of the GitHub Copilot team’s experience evaluating AI models, with a focus on our offline evaluations—the tests we run before making any change to our production environment.

The post How we evaluate AI models and LLMs for GitHub Copilot appeared first on The GitHub Blog.

There are so many AI models to choose from these days, from the proprietary foundation models of OpenAI, Google, and Anthropic to the smaller, more open options from the likes of Meta and Mistral. It’s tempting to hop immediately to the latest models. But just because a model is newer doesn’t mean it will perform better for your use case.

We recently expanded the models available in GitHub Copilot by adding support for Anthropic’s Claude 3.5 Sonnet, Google’s Gemini 1.5 Pro, and OpenAI’s o1-preview and o1-mini. While adding these models to GitHub Copilot, we always kept performance, quality, and safety top of mind. In this article, we’ll share some of the GitHub Copilot team’s experience evaluating AI models, with a focus on our offline evaluations—the tests we run before making any change to our production environment. Hopefully our experience will help guide your own evaluations.

What is AI model evaluation?

AI models are systems that combine code, algorithms, and training data to simulate human intelligence in some way. GitHub Copilot gives users the choice of using a number of AI models trained on human language, which are known as large language models, or LLMs. OpenAI’s models are some of the most well-known LLMs due to ChatGPT’s popularity, though others such as Claude, Gemini, and Meta’s Llama models are increasingly popular as well.

AI model evaluation is the assessment of performance, quality, and safety of these models. Evaluations can include both automated tests and manual use. There are a number of frameworks and benchmarking tools available, but we generally build our own evaluation tools.

Automated tests enable evaluation at scale. You can see how well a model performs on a large number of tasks. The downside is that you tend to need some objective criteria to evaluate the output. Manual testing enables more subjective evaluations of output quality and accuracy, but is more time intensive. Combining the two approaches enables you to get a sense of the subjective quality of responses without relying entirely on anecdotal evidence or spending an enormous amount of time manually evaluating thousands upon thousands of responses.

Automating code quality tests

We run more than 4,000 offline tests, most of them as part of our automated CI pipeline. We also conduct live internal evaluations, similar to canary testing, where we switch a number of Hubbers to use a new model. We test all major changes to GitHub Copilot this way, not just potential new models.

We’ll focus here on our offline tests. For example, we evaluate potential models by their ability to evaluate and modify codebases. We have a collection of around 100 containerized repositories that have passed a battery of CI tests. We modify those repositories to fail the tests, and then see whether the model can modify the codebase to once again pass the failing tests.

We generate as many different scenarios as we can in different programming languages and frameworks. We’re also constantly increasing the number of tests we run, including using multiple different versions of the languages we support. This work takes a lot of time, but it’s the best way we have to evaluate a model’s quality.
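In miniature, that break-the-tests loop might look like the sketch below. Here the “repository” is a single source string and the “model” is a stub function; a real harness would run containerized repos through CI and send the failing code and test output to the candidate model.

```python
def run_tests(source: str) -> bool:
    """Execute the candidate source and its test; True means CI is green."""
    namespace = {}
    try:
        exec(source, namespace)
        return namespace["add"](2, 3) == 5  # the repo's "CI test"
    except Exception:
        return False

BROKEN = "def add(a, b):\n    return a - b\n"  # bug injected into a passing repo

def fake_model_fix(source: str) -> str:
    # Stand-in for asking a candidate model to repair the failing source.
    return source.replace("a - b", "a + b")

def evaluate_model(fix_fn, broken_repos) -> float:
    """Fraction of broken repos the model brings back to green."""
    return sum(run_tests(fix_fn(src)) for src in broken_repos) / len(broken_repos)

print(evaluate_model(fake_model_fix, [BROKEN]))  # -> 1.0
```

The pass-rate number this produces is objective and scales to hundreds of repos, which is what makes it useful as an automated quality signal.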

Using AI to test AI

GitHub Copilot does more than generate code. Copilot Chat can answer questions about code, suggest various approaches to problem solving, and more. We have a collection of more than 1,000 technical questions we use to evaluate the quality of a model’s chat capabilities. Some of these are simple true-or-false questions that we can easily evaluate automatically. But for more complex questions, we use another LLM to check the answers provided by the model we’re evaluating to scale our efforts beyond what we could accomplish through manual testing.

We use a model with known good performance for these purposes to ensure consistent evaluations across our work. We also routinely audit the outputs of this LLM in evaluation scenarios to make sure it’s working correctly. It’s a challenge to ensure that the LLM we use for evaluating answers stays aligned with human reviewers and performs consistently over many requests.
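Stripped down, that mixed scoring might look something like this. The questions, the reference answer, and the keyword-matching judge are illustrative stand-ins; the real judge is itself an LLM with known good performance.

```python
# Hypothetical mix of auto-graded and judge-graded evaluation questions.
QUESTIONS = [
    {"q": "Is Git distributed?", "kind": "bool", "expected": True},
    {"q": "Explain rebasing.", "kind": "open",
     "reference": "Rebasing replays commits onto a new base."},
]

def judge(answer: str, reference: str) -> bool:
    # Stand-in for a call to the judge model, which grades the candidate's
    # answer against the reference; here it just checks a key phrase.
    return "replays commits" in answer.lower()

def score(candidate_answers: dict) -> float:
    """Fraction of questions the candidate model answered acceptably."""
    correct = 0
    for item in QUESTIONS:
        answer = candidate_answers[item["q"]]
        if item["kind"] == "bool":
            correct += (answer is item["expected"])  # cheap automatic check
        else:
            correct += judge(answer, item["reference"])  # LLM-as-judge check
    return correct / len(QUESTIONS)
```

A production harness also has to audit the judge itself, since a drifting judge silently corrupts every downstream score.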

We also run these tests against our production models every day. If we see degradation, we do some auditing to find out why the models aren’t performing as well as they used to. Sometimes we need to make changes, such as modifying some prompts, to get back to the quality level we expect.

Running the tests

One of the cool things about our setup is that we can test a new model without changing the product code. We have a proxy server built into our infrastructure that the code completion feature uses. We can easily change which API the proxy server calls for responses without any sort of change on the client side. This lets us rapidly iterate on new models without having to change any of the product code.
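The shape of that indirection is simple, which is what makes it powerful. In sketch form (the backend names and the routing attribute are invented for illustration):

```python
# Sketch of a model proxy: clients always call one endpoint, and the proxy
# decides which model backend actually serves the request.
BACKENDS = {
    "production": lambda prompt: f"[prod-model] {prompt}",
    "candidate": lambda prompt: f"[candidate-model] {prompt}",
}

class CompletionProxy:
    def __init__(self, backend: str = "production"):
        self.backend = backend  # flip this to re-route traffic server-side

    def complete(self, prompt: str) -> str:
        # The client-facing call never changes, no matter the backend.
        return BACKENDS[self.backend](prompt)

proxy = CompletionProxy()
proxy.complete("def add(a, b):")   # served by the production model
proxy.backend = "candidate"        # swap the model for an evaluation run
proxy.complete("def add(a, b):")   # same client call, new model
```

Because the swap happens behind the proxy, a new model can be evaluated end to end without shipping any client or product code.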

All of these tests run on our own custom platform built primarily with GitHub Actions. Results are then piped in and out of systems like Apache Kafka and Microsoft Azure, and we leverage a variety of dashboards to explore the data.

Making the call to adopt or not

The big question is what to do with the data we’ve collected. Sometimes a decision is straightforward, such as when a model does poorly across the board. But what if a model shows a substantial bump in acceptance rates, but also increases latency?

There are sometimes inverse relationships between metrics. Higher latency might actually lead to a higher acceptance rate because users might see fewer suggestions total.

GitHub’s goal is to create the best quality, most responsible AI coding assistant possible, and that guides the decisions we make about which models to support within the product. But we couldn’t make those decisions without the quality data our evaluation procedures provide. Hopefully we’ve given you a few ideas you can apply to your own use cases.

Build generative AI applications with GitHub Models

GitHub Models makes it simple to use, compare, experiment with, and build with AI models from OpenAI, Cohere, Microsoft, Mistral, and more.

Start building now >

How developers spend the time they save thanks to AI coding tools
https://github.blog/ai-and-ml/generative-ai/how-developers-spend-the-time-they-save-thanks-to-ai-coding-tools/
Thu, 14 Nov 2024 17:32:20 +0000

Developers tell us how GitHub Copilot and other AI coding tools are transforming their work and changing how they spend their days.

The post How developers spend the time they save thanks to AI coding tools appeared first on The GitHub Blog.

The arrival of AI coding tools is already changing the way developers work.

Nearly all respondents in our recent survey said they’ve used AI coding tools at some point—and the vast majority said these tools make it easier to write higher-quality, more secure code. Similarly, the State of JS 2023 survey found that only 18% of respondents did not regularly use an AI coding tool.

Generative AI for the IT Pro co-author, Chrissy LeMaire, told us recently that LLMs are transforming her workflow. “When you start a project, you have to set up all sorts of things,” she said. “It takes a while before you get to the exciting parts. Now I let the LLM do it for me. With AI coding, it starts out exciting.”

GitHub research suggests AI tools can boost developer productivity by up to 55%—but what do you do with the time saved from using AI coding tools? According to our survey, respondents spend more time on system design, collaborating with colleagues, and learning new skills, among other things.

What’s more, AI tools make them better at what they do, creating positive feedback loops that help them not only write more code, but write better code as well. To find out what this looks like on the ground, we’ve been talking with developers about how AI is changing their workflows and what they’re doing with the time saved from using AI coding tools.

In this article, we’ll:

  • Hear from developers about how they use AI coding tools to save time.
  • Share a few tips on how to get started.

Less time debugging, more time planning

Many developers report the ability to spend more time on the designing and planning stages thanks to AI—and that means more time for being a system thinker, which is a net benefit. “I spend less time figuring things out through trial and error, and more time making sure my code is secure and performant,” open source developer, Claudio Wunder, told us in our recent Q&A on AI coding tools.

He’s not alone. In our survey, 40-47% of respondents say AI has enabled them to spend more time designing systems and customer solutions. Meanwhile, 37-43% of respondents said they spent more time refactoring and optimizing code.

In other words: Developers are spending their time making their code better, instead of just trying to make their code work.

The process starts before you even write your first line of code. For example, Wunder uses GitHub Copilot Chat to think through projects. The practice of explaining ideas to an inanimate object to clarify your own thoughts is called “rubber ducking.” But LLMs bring a new dimension to this activity: They can talk back. Instead of just putting his thoughts in order, Wunder actually gets feedback on his ideas and goes into new projects with a clearer idea of how he wants to structure his code. LeMaire finds herself doing much the same: “I spend less time doing grunt work and more time just talking.”

Here are some practical tips to get started designing systems and refactoring code with GitHub Copilot:

  • State your preferences. Wunder starts new projects by telling GitHub Copilot Chat he prefers ES6 built-ins and arrow functions. “These simple statements can usually help you achieve your desired code output and better understand Copilot’s thought process,” he explains.
  • Share examples. LeMaire approaches projects in much the same way she did without LLMs: By finding examples of similar things. “I upload the sample files, sometimes concatenating them into one big file, and tell the LLM what I’d like,” she says.
  • Start with a skeleton function. When it comes time to generate inline code, Wunder recommends starting with meaningful parameters, arguments, and comments that explain what the function should be and what each parameter should control.
  • Debugging as a conversation. Wunder keeps all the related code open in VS Code and starts a new Copilot Chat session with the prompt “Let’s debug some code.” He then asks Copilot questions, such as what it believes the code is doing and what would happen in response to different user inputs. “I try to provide as much context to Copilot about what the code is supposed to achieve and I keep iterating with follow-up questions until I find the problems and solutions,” he says.

Less time working on docs, more time working together

AI isn’t just for talking to machines. It frees up time for developers to talk with each other as well. 40-47% of our survey respondents say AI helps them spend more time collaborating with team members on projects. Another 39-45% said they spent more time on code reviews, which are one of the main ways developers collaborate and help each other produce better work.

LLMs can automatically generate code comments and documentation, making code easier to understand and, by extension, easier to contribute to. “I was able to go through some JavaScript and have an LLM generate JSDoc-formatted documentation based on function names and parameters with something like 95% accuracy,” LeMaire says. “My team really loved that.”

The upshot: Not only do developers have more time to work together, they can do so with more ease.

Here are some practical tips to get started when using LLMs to improve collaboration:

  • Use your favorite existing help text as examples. LeMaire recommends providing LLMs documentation examples in the style you want to replicate. For example, she prompts the LLMs with help text from her favorite PowerShell commands to help the LLMs generate documentation that matches the tone and format she prefers.
  • Leave comments in every file. Whenever you open a code file, add some comments to the top as a header to help GitHub Copilot better understand the code. “This will accelerate not only your productivity, but also your team’s, as you leave these little treats behind,” says GitHub developer advocate, Christopher Harrison. “A digital version of donuts in the break room.”

Less time searching, more time learning and experimenting

It’s important to keep up with the latest languages, databases, libraries, frameworks, and APIs, but it can be overwhelming. AI helps by giving you more time to keep up with cutting-edge technologies. In our survey, 43-47% of respondents said they spent more time on learning and development, while 44-46% said they spent more time on research and development and emerging technologies.

AI also aids in learning, providing real-time assistance as developers learn new skills. LeMaire recently transitioned from a career in DevOps to one in front-end development and has been using AI tools to work faster while deepening her knowledge of front-end technologies. “It made switching from writing mostly PowerShell and SQL to writing mostly JavaScript much less stressful,” she says. “Otherwise, I would have had to spend much more time context switching and looking things up.”

Similarly, DevOps architect, Alessio Fiorentino, has been using GitHub Copilot to learn Rust. “Rust is a powerful language that provides full control over the execution flow, but it has many nuances and requires a different way of thinking, especially for those who started with Python or JavaScript,” Fiorentino told us in a previous article. “AI assists me in navigating these complexities and ensures that I write efficient and idiomatic Rust code.”

AI coding tools can be helpful, but they still require a pilot—and they aren’t a substitute for learning. “Even if LLMs are able to generate entire applications in the future, you will need to evaluate the code,” Wunder says. He sees the role of the developer transforming as LLMs take care of implementation details and recommends that developers work on understanding higher-level computer science concepts, and sharpen their communication skills. Fortunately there’s some synergy there: You need to write clear instructions to use an LLM, so AI coding tools actually strengthen the skills developers of the future need to hone.

Here are some practical tips to get started when using GitHub Copilot as a learning tool:

  • Navigate a new or unfamiliar language or technology. Wunder recommends using GitHub Copilot to walk through the syntax and features of a given language. “I started learning Go recently and asked Copilot ‘What does adding a type after := on a variable definition do?’ It also helped me understand how namespacing and module definitions in Go work.”
  • Onboard with new code bases. Try highlighting a block of code and asking Copilot to explain it, or ask questions about the code, such as which variables relate to particular functionality.
  • Visualize what you’re learning. GitHub developer advocate, Kedasha Kerr, has used Copilot’s mermaid diagramming features to better understand how data flows through an application.

What’s next

In a remarkably short period, AI coding tools have become an essential part of the development stack, rapidly transforming how developers spend their time and approach their work. Software development is shifting towards design and collaboration, as opposed to squashing bugs. It’s still early, but AI is empowering developers to unlock their potential like never before. We’re excited to see what you build with it.

Getting started with edge computing
https://github.blog/developer-skills/application-development/getting-started-with-edge-computing/
Fri, 01 Sep 2023 15:00:01 +0000

Edge computing practitioners answer your questions about when and why to build applications at the edge.

The post Getting started with edge computing appeared first on The GitHub Blog.

The Microsoft Azure cloud computing dictionary describes edge computing as a framework that “allows devices in remote locations to process data at the ‘edge’ of the network, either by the device or a local server. And when data needs to be processed in the central datacenter, only the most important data is transmitted, thereby minimizing latency.”

There’s quite a bit to unpack there. How does building edge computing software differ from writing other cloud applications, what do you need to know to get started, and does Microsoft’s definition hold up in the first place? The ReadME Project Senior Editor Klint Finley gathered three experts to answer these and other questions.

Let’s meet our experts:

Jerome Hardaway is a senior software engineer at Microsoft, where he works in Industry Solutions Engineering. He’s also a U.S. Air Force veteran and the executive developer of Vets Who Code, a tuition-free, open source, coding-immersive non-profit that specializes in training veterans.

Kate Goldenring is co-chair of the Cloud Native Computing Foundation IoT Edge Working Group and a senior software engineer at Fermyon Technologies.

Alex Ellis is the founder of OpenFaaS, a former CNCF ambassador, and creator of the Linux Foundation’s Introduction to Kubernetes on Edge with K3s course.

Klint: Let’s start by getting on the same page about what we’re talking about. I shared the Microsoft Azure cloud computing dictionary definition of edge computing. Does that definition work? Would you change anything about it?

Jerome: I would make the definition more human-centric. It’s not just about devices, it’s about the person. You want data processed and updated at the edge of the network as close to the person using it as possible, because, without a person to answer it, a cell phone is just a block of electricity.

Kate: I think it’s a good definition, given that it’s 12 words long. I would add more to it. When the CNCF IoT Edge Working Group was working to define edge computing, we found that definitions tend to fall into three main categories. The most common, and the one that Microsoft seems to be using, is geography-based—the distance between devices and servers, for example. The second is a resource-based definition, which prioritizes the resource constraints faced in edge computing. The third was connectivity-based.

Alex: Likewise, I’d change the definition to reflect how broad a topic edge computing can be. Just like with cloud computing, you can have two industry experts with a wealth of experience talking about two very different things when you ask them about edge computing.

Klint: I could see there being some confusion between edge computing and private or hybrid cloud, since all three typically involve some on-premises computing power. What are the main differences between edge computing architectures and more traditional architectures?

Jerome: A big part of the difference is about the intent, and that will affect how you architect your solution. Private and hybrid cloud computing is usually more about controlling where your data can go. For example, a healthcare company might need to make sure that patient data never leaves their premises. Edge computing is more about specific requirements, like the need to have an extremely responsive application, for example. Edge computing is about ensuring you have the right resources in the right places.

Kate: One way to think about it is that edge computing is a continuum that includes the downstream devices; upstream cloud resources, whether those are part of a public or private cloud; and whatever nodes you might have in between. You have to think about what sort of storage and computing resources you will have available at each point in the continuum. Network connectivity is a big constraint for much of what we talk about when we talk about edge computing.

Alex: You’re not always necessarily working around resource constraints in edge computing. Sometimes you might be working with rather capable devices and servers. But resources and environment are certainly something you have to consider when designing an edge computing solution in a way you might not have to for a more traditional scenario. For example, in a hybrid cloud scenario, you might be able to assume that devices will maintain a traditional TCP/IP network connection at all times. But what if you have a remote sensor powered by a battery that has to be changed manually? You might want to plan to have that sensor only intermittently connect to a network, rather than maintaining a constant connection, to conserve power and reduce the number of trips someone has to make to change the batteries. The device might only support a low-power wireless protocol to communicate with the intermediary device, so you’ll need to accommodate that as well.
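A toy version of the battery-powered sensor pattern Alex describes: buffer readings locally and send them in one batch when the radio comes up. The class, the batch threshold, and the transport stub are invented for illustration; a real device would use a low-power protocol such as MQTT or LoRaWAN for the actual send.

```python
class BufferedSensor:
    """Store readings locally; connect and flush only when a batch is full."""

    def __init__(self, flush_at: int = 3):
        self.flush_at = flush_at  # batch size that justifies powering the radio
        self.buffer = []          # readings held while disconnected
        self.sent = []            # batches that made it upstream

    def record(self, value: float):
        self.buffer.append(value)
        if len(self.buffer) >= self.flush_at:
            self.connect_and_flush()

    def connect_and_flush(self):
        # Stand-in for: power up the radio, transmit one batch, power down.
        self.sent.append(list(self.buffer))
        self.buffer.clear()

sensor = BufferedSensor(flush_at=3)
for reading in [21.5, 21.7, 21.6, 21.9]:
    sensor.record(reading)
# three readings went out as one batch; the fourth waits for the next flush
```

Batching like this trades data freshness for battery life and fewer maintenance trips, which is exactly the kind of constraint-driven design decision edge computing forces.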

Klint: What applications are NOT a good fit for edge computing?

Jerome: Adding more intermediaries between a device and a data store creates a bigger attack surface that you have to secure, so some industries, healthcare for example, will need to pay extra attention to the possible trade-offs. You have to think about the requirements and the benefits versus the challenges for your specific use case. You need to make sure you understand the business problems you’re trying to solve for your organization.

Alex: I don’t want to pigeonhole edge computing by saying it’s good for certain things and not others. It’s more about building an appropriate solution, which can differ greatly depending on the requirements. To continue with the healthcare example, building an edge computing system for medical devices will be quite different from building one for restaurant point-of-sale systems, which will be different from a solution for airlines. A POS device might run a full installation of Android and you might need to reboot it periodically to install updates. For medical devices, you’re probably going to want something like a real-time operating system that can run continuously without the need to take it offline to install updates.

Kate: It comes down to the characteristics of the application. If you have a highly stateful application that needs constant connectivity and lots of storage, then maybe you’d be better off running that application in the cloud. But you still might want to have some intermediaries near the edge to handle some lighter processing.

Klint: How portable between platforms do edge computing applications tend to be? Any tips on making them more portable?

Kate: It depends on what you mean by platform. Edge computing software tends not to be very portable between scenarios because of how customized it is to its circumstances. There are many different ways to connect to different devices, so there often needs to be a lot of custom logic to facilitate that. But one thing you can make consistent is what you do after ingesting data from your devices. You can focus on these elements to make things more portable.

Jerome: The more features of a platform you use, the less portable your solution becomes. To use an analogy: the more you rely on the built-in functionality of a framework like Ruby on Rails as opposed to implementing your own solutions, the harder it will be to move, but implementing your own is also more work on your end. The tradeoff is that the more you leverage the technology, the more dependent you are on it. That’s not always bad, but you need to be aware of it.

Alex: Again, it depends on what you’re running at the edge and what resources and capabilities are available. Embedded software for bespoke devices might not be very portable, but if your hardware can run a container or a virtual machine, your solution can be very portable.

Klint: What sorts of skills should developers learn to prepare to work in edge computing development?

Alex: I have a free course on Kubernetes and K3s at the edge. It has a list of associated skills that are useful in this space, such as MQTT, shell scripting, and Linux. Of course, what you need to learn depends on what sort of edge computing you will be doing. In some cases you might be making an otherwise traditional web or mobile application more responsive by putting resources closer to the user, but in others you might be working with industrial equipment or automobiles. Either way, Kubernetes isn’t a bad skill to have.

Jerome: Language-wise, I recommend Python, because you’ll be working with many different platforms and environments, and Python plays well with just about everything. It’s one of the most transferable technical skills you can learn. Edge computing is also one of the few areas where I recommend getting professional certifications for the technologies you use, because it showcases that you’re really taking the time to learn them. And as always, work on your communication skills, because you’re going to FUBAR a thing or two.

Kate: Edge computing is a really broad field. It’s also fairly new, so you’re not alone in figuring out what you need to learn. Learning about networking technologies and all the various protocols that devices use to communicate might be a good starting point. And you can always just get a Raspberry Pi and build something. Connect it to an edge computing platform and start learning the terminology. Have some fun, that’s the best way to learn.


Want to get The ReadME Project right in your inbox? Sign up for our newsletter to receive new stories, best practices and opinions developed for The ReadME Project, as well as great listens and reads from around the community.

The post Getting started with edge computing appeared first on The GitHub Blog.

Elevating open source contributors to open source maintainers https://github.blog/open-source/maintainers/elevating-open-source-contributors-to-open-source-maintainers/ Thu, 01 Jun 2023 17:46:43 +0000 https://github.blog/?p=72156 Experts explain how to recruit and onboard co-maintainers.

The post Elevating open source contributors to open source maintainers appeared first on The GitHub Blog.

Maintaining an open source project is a lot of work. You have to write features, respond to issues, review pull requests, moderate community discussions, write documentation, and more. Bringing in co-maintainers can help prevent burnout by offering fresh perspectives and making a project more fun and collaborative. But giving administrative access to your repository is intimidating. It’s something you really only want to give to someone who is trustworthy, understands the vision for your project, and will stick around long enough to make a difference. Where do you find such people?

One time-honored way is to elevate your existing contributors into full-fledged maintainers. This month, The ReadME Project Senior Editor Klint Finley spoke with three experienced maintainers about how they identify, onboard, and collaborate with co-maintainers.

Carol Willing is the VP of Engineering at Noteable, a three-time Python Steering Council member, a Python Core Developer, PSF Fellow, and a Project Jupyter core contributor. She’s also a leader in open science and open source governance, and is driven to make open science accessible through open tools and learning materials.

Brandon Roberts is an open source software advocate. He is a Google Developer Expert for Angular, a maintainer of the NgRx project, creator of the Analog meta-framework, and Head of Engineering at OpenSauced.

Paulus Schoutsen is the founder of Home Assistant and Nabu Casa. His work revolves around building the “Open Home”: his vision of a smart home that offers privacy, choice, and durability.

Klint: How do you identify contributors that would be good co-maintainers? What are the signs of someone ready and willing to step up?

Carol: At Project Jupyter we run a regular cronjob to find active contributors. We regularly reach out to those folks and ask if they want a more active role. If they do, we coach them informally on how to be a maintainer, how to review pull requests, and things like that. We tend to be pretty liberal about giving people merge access. It’s more about trusting that someone will follow the project’s direction rather than trusting them not to make errors. It’s about how they engage and whether they will make the project better in the long term.
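The cronjob Carol describes could be approximated with a small script like the following. This is a hedged sketch, not Jupyter's actual tooling: the function name, the 90-day window, the five-commit threshold, and the (login, timestamp) data shape are all assumptions for illustration.

```python
from collections import Counter
from datetime import datetime, timedelta, timezone


def find_active_contributors(commits, days=90, min_commits=5):
    """Return logins with at least `min_commits` commits in the last `days` days.

    `commits` is a list of (login, ISO-8601 timestamp) pairs, e.g. as gathered
    from a repository's commit log by a scheduled job.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    counts = Counter(
        login
        for login, ts in commits
        # Normalize a trailing "Z" so fromisoformat accepts the timestamp.
        if datetime.fromisoformat(ts.replace("Z", "+00:00")) >= cutoff
    )
    return sorted(login for login, n in counts.items() if n >= min_commits)
```

A cron entry could run this weekly and mail the resulting list to maintainers as outreach candidates.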

For a larger project like Python, we have a more formal structure. The first step is joining the triage team, which helps prioritize issues but doesn’t merge pull requests. We mentor active, engaged triage team members and eventually give many of them full merge permission. When someone is made a core contributor to Python, there’s usually a month-long process where the mentor oversees the work.

Brandon: I was a contributor to NgRx before I became a co-maintainer. I just jumped in and started looking at issues and solving problems. If you do that consistently over time, maintainers will see you as a solid contributor. I look for people that take this level of initiative. It also matters how you interact with other contributors or people who want to contribute. When I notice someone who is helping out with issues and actively investing time into a project, I invite them behind the curtain, so to speak, to give them more insight into the decision-making process and offer an opportunity to have more influence on the project.

Paulus: I used to be of the “give merge access early and often” school of thought, but Home Assistant is too big for that now. Still, even at our size, a social connection is important. Co-maintainers need to see you as a peer. For that, you need to create a community. We started with GitHub Issues, but that’s not really a tool for building community. Then we tried a mailing list, then a forum. What we found that worked best for us is chat. It’s a good place to meet contributors and see how they interact with the rest of the community. It gives you a chance to see not just the quality of their contributions but how they deal with people who have different opinions.

Klint: How do you encourage someone who would clearly be a good maintainer but is worried that they are not qualified?

Brandon: If they have reservations, I would point out that they’re already doing the work. That’s the whole reason I’m asking them.

People think the leap from contributor to maintainer is a grand thing, but you continue to do the same things you did before, just as an official representative of the project. You also have more say in what happens and in the direction the project goes.

Luckily, all the people I’ve asked to become maintainers have been excited to join the team. But sometimes people need a little more encouragement when you ask them to step into an unfamiliar domain. I tell people to trust the process and not count themselves out just because they don’t think they have the right experience or skill set.

Paulus: It hasn’t been a huge issue for us either. If they’re already an active contributor, they tend to know what to expect. Also, we have something like 100 maintainers. It would probably be more pressure if you were the third maintainer. You can invite people to a GitHub organization without giving them commit access, so that’s a low-pressure way to get someone more involved.

Carol: I see this fairly often, when contributors are worried they either don’t have the technical abilities to be a maintainer or think it will be too much responsibility. How I respond depends on their concerns. It starts with a conversation.

For people who are worried about the technical side, I emphasize that on a large project, no maintainer really understands the whole code base. I remind people that we all make mistakes and if something goes really wrong we can always revert changes.

As far as the responsibility and time commitment, like Brandon, I emphasize that people are already contributing to the project. Becoming a maintainer is less about committing more time to the project and more about committing to the project long-term. Of course, things come up in people’s lives and they need to take on a less active role and that’s fine. Project Jupyter has a “Red Team” of maintainers who are typically more active, and a “Blue Team” that is less active. Being able to switch between the two teams gives people permission to pause and contribute when they can without feeling like they’ve abandoned the project. It helps us avoid losing maintainers to burnout.

Klint: How do you onboard new maintainers? What do they need to know that they didn’t likely already know?

Brandon: We invite them into our private chat channels where we have more focused conversations. We share our internal documentation. There’s a fair amount of administrative work, adding them to things, sending them calendar invites. It’s a bit like onboarding someone to a new job.

Paulus: We have plenty of documentation about how we do things. We also use a lot of automation, like linters and formatters, which takes much of the pressure off manual reviews. No one wants to reject a pull request someone spent hours on because they missed a comma or something.
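The automation Paulus mentions is often a gate script in CI that runs every linter and formatter before a human reviews anything. Here is a minimal hypothetical sketch (the tool names are common examples, not Home Assistant's actual setup, and the injectable `runner` exists mainly to make the gate testable):

```python
import subprocess

# Hypothetical pre-merge gate: run each linter/formatter in check mode and
# collect failures, so reviewers never argue over a missing comma.
DEFAULT_CHECKS = [
    ["ruff", "check", "."],      # lint
    ["black", "--check", "."],   # formatting
]


def run_checks(checks=DEFAULT_CHECKS, runner=subprocess.run):
    """Return the commands that failed; an empty list means everything passed."""
    return [cmd for cmd in checks if runner(cmd).returncode != 0]
```

CI would call `run_checks()` and fail the build if the returned list is non-empty, leaving human review time for design questions rather than style nits.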

Carol: At Jupyter, we have a template that can be used by any of our groups. It covers the project’s history, its current direction, who to contact to get involved in different aspects of the project, and each maintainer’s current Red Team/Blue Team status. We call it the “Team Compass” because it helps keep people moving in the same direction.

Klint: How do you resolve disputes with other maintainers?

Brandon: We’ve had differences of opinion, but those conversations have been pretty chill. I wouldn’t call them disputes; we keep them conversational. Sometimes we punt on a decision because it might cause friction between maintainers. By the time we revisit it, we often have more information, which helps us make an informed decision and avoid heated discussions.

Carol: First and foremost, we try to keep it to the technical stuff and avoid getting personal. It’s important to model the behaviors you want to see from the rest of the community. I try to understand a person’s perspective. What’s driving them? The real trouble starts when you play favorites or make people feel like they’re not being heard. I look for context and try to build a win-win situation. We often think of things as binary, when in reality there is a spectrum of possibilities. Sometimes a decision is a one-way door, but a lot of times you say “Let’s move ahead this way, and then reevaluate in 30 days.” That lets you keep the conversation going without blocking the project.

Paulus: Most of the disputes we have are about project rules and processes. Step one is to make sure discussions are happening in the right place. A pull request isn’t the right place to have a conversation about changing our rules. It would be better to have the conversation on Discord, where it’s more visible. We document why different rules are in place and why past decisions were made so we can point to them when issues arise. Sometimes, showing someone the “why” is enough to settle a debate. After an initial discussion, we might start a formal process for changing a rule. Going through a process slows things down, which is actually good. It gives people a chance to cool off.

Klint: Have you ever had anything go so wrong that you had to remove another maintainer from a project?

Paulus: We had one incident many years ago. There was a good front-end contributor who had a short temper and would get angry with users just for requesting something. That doesn’t work. We tried to calm him down, but we eventually had to let him go. I think this was the only time we had a maintainer break our Code of Conduct. We haven’t had to ask someone to leave because of their technical contributions.

Brandon: Although we’ve never had to remove anyone from our team, there was an instance where we attempted to collaborate with an individual on a project, but it did not work out. Their ideas were good, but they weren’t a fit for the maturity level of our project. They moved forward and started a new project with those ideas and carved out their own space, which worked out better for everyone.

Carol: Unfortunately, yes, we have. Removing someone is the last resort. Usually, it’s because they’ve repeatedly crossed a line. We give people many chances to correct their behavior. Sometimes they do and there’s no further issue. But other times we see a pattern of increasingly bad behavior. You have to look at what the person is going through. Sometimes people just need a break from the project. But ultimately you can’t jeopardize the sustainability of the project for one person who is not acting professionally or in the project’s best interest. It’s not a fun process. You never want to say “Please leave,” but sometimes it’s better to say “Please leave” than have the whole project suffer and become a toxic environment.

Klint: Sometimes it’s hard to let go and let others step in. Do you have any tips or best practices for delegating tasks to other maintainers?

Brandon: I used to feel like I had to come up with all the ideas and execute them myself. But once you have other people on the team, you can share the ideas and let other people run with them if you don’t have the bandwidth. My fellow maintainers often have their own ideas about how to help out, and I encourage them to share them so others can do the same.

Paulus: I don’t write much code anymore, I mostly manage projects. We’re in an unusual position because we integrate with lots of different hardware. We have to deal with firmware and microcontrollers and build software bridges between different devices. It requires a lot of people with different areas of expertise. I couldn’t do it all even if I wanted to. The best successes we have usually come when I create small project teams. For example, I might say “Let’s do something about music,” and pull in people from different areas, and then help coordinate different activities. If you’re having trouble letting go, consider the areas where other people have more expertise and ask people to pitch in there.

Carol: If you’re worried about someone making mistakes, remember that everyone makes mistakes, including you. Don’t hold someone to a higher standard than you hold yourself. Tasks you’ve been putting off are often good learning opportunities for other people: they can do the legwork while you supervise, and it’s a win-win. I think it’s helpful to have processes and guidelines for best practices when triaging. That way, people know where to look for things to do and understand what is expected of them.


Do you have a burning question about Git, GitHub, or open source software development? Drop your question to us on your social channel of choice using #askRMP, and it may be answered by a panel of experts in an upcoming newsletter or a future episode of The ReadME Podcast!


After the offer: staying in tech long-term https://github.blog/developer-skills/career-growth/after-the-offer-staying-in-tech-long-term/ Thu, 29 Sep 2022 16:00:27 +0000 https://github.blog/?p=67322 Tech can be a tricky industry (to say the least). We talked with three tech professionals who share why they stay, what has helped them the most, and the power of switching things up.

The post After the offer: staying in tech long-term appeared first on The GitHub Blog.

There are lots of resources to help you land your first tech job, from technical training to interview preparation. But once you’re in, it’s harder to find advice on navigating the industry and staying fulfilled, leaving many tech workers feeling stuck.

In fact, research published by TalentLMS and Workable found that in 2022, 72% of IT workers were considering leaving their jobs. Among the most common complaints was limited career progression (41%). Most (75%) said their organizations focus more on attracting new talent instead of investing in existing staff. Meanwhile, as technology occupies more of our time, it’s easy to fantasize about moving on to careers that involve less screen time.

Many of the industry’s most pressing issues need to be solved by organizational leaders—or by society as a whole. Yet, despite these issues, tech is a uniquely rewarding industry. Many roles allow, or even encourage, remote work and flexible schedules. Opportunities abound to create solutions that impact the lives of vast numbers of people.

The ReadME Project senior editor Klint Finley assembled a panel to talk about why they stay in tech, how they’ve advanced their careers, the value of mentors, and more.

Nick DeJesus is a full-stack developer with a background in React, React Native, and Jamstack architectures. He’s currently the CTO of Black Tech Pipeline and the maintainer of the use-shopping-cart library. He’s also a professional video game competitor and participates in Tekken tournaments around the world.

Diana Liang is a technical writer. Previously she was a software engineer at Homesite Insurance and, before that, a full-time registered nurse.

Justin E. Samuels is the founder and CEO of the Render-Atlanta Conference (RenderATL) and a senior software engineer at Mailchimp.

Klint: What keeps you in tech? What is it you find in this field that you don’t find elsewhere?

Nick: I’m part of this organization called Resilient Coders, which helps bring BIPOC people into tech jobs. So one of the reasons I want to stay is to help other people find their way into the industry and make it better for all of us. The community aspect is really important to me and it helps keep me motivated.

Besides that, I love being able to take my ideas and turn them into reality. I love being able to use my coding skills to provide value to people through my projects. And then, of course, there’s the financial stability that the technology industry provides.

Diana: Financial stability is the big thing for me. I made more money as an entry-level software engineer than I did in my previous career as a nurse. It would take several more years of working on the hospital floor to make the same salary. Tech is stressful, but it’s not the sort of mental and physical stress I experienced in healthcare. I don’t want to discourage people from pursuing careers in nursing, because helping people heal was more satisfying work. But technology is satisfying in its own ways, and I’m much happier with my personal life now. I also feel like I have more options in tech than I did in healthcare. There are many different roles besides coding. I made the transition from engineering to technical writing, which is a better fit for me. So if you’re feeling like leaving the industry, maybe you just need to change the type of work you do.

Justin: What keeps me here are the technical challenges. It’s like coming to work every day and there’s a Rubik’s Cube that you have to solve by the end of the day. Then, when you’re done, someone switches up the cube so the solution will be different tomorrow. No two days are alike. There are always new, cutting-edge challenges to keep you busy. There’s a real sense of achievement. I’m proud that I have a steady income and have reached a point in my life where I don’t have to worry about my bank account, but if you’re only chasing money you won’t be happy in this line of work.

Klint: Can you talk about a goal or milestone you’ve achieved in your career and how you were able to reach it?

Justin: My first career goal was to become a senior engineer within three or four years. I did that by finding opportunities that weren’t outlined for me. I looked around and saw things that could be done to improve the organization and asked the leadership team if I could work on those things. That showed them that I was interested in more than a paycheck: I was interested in advancing the company. It was a lot of work. I came in early and stayed late. I under-promised and over-delivered. What it came down to was showing initiative and not being afraid to ask for opportunities.

Nick: Skill-building outside of work has been the most helpful thing for growing my career. I started out working in IT support, but I wanted to move up, so I went to a coding bootcamp and built my first app. I compete in professional tournaments for the video game Tekken, so I created an app that helps people study the game. That app attracted 30,000 users and that success helped me make the transition from support to engineering. But the company I started at went through a lot of instability due to an acquisition, which made it hard to progress. Most of my career advancement—and this is true for many people in tech—came from changing companies rather than moving up within a company. Side projects like use-shopping-cart help me learn and demonstrate skills beyond the ones I use at work and have helped me advance my career.

Diana: I suppose my most significant milestone, besides getting my first software engineering job, was the transition to technical writing. After a year as a developer, I felt stuck. I wasn’t as comfortable coding as I wanted to be, and I wasn’t advancing my skills fast enough. So I researched other careers I could pursue given the level of experience I had. There were a lot of possible roles, including project manager, business analyst, scrum master, or technical writer. Of those, I thought technical writing was a good fit for how I work. I’ve always liked to write. I did some writing as part of my software development work and always enjoyed it more than coding. I reached out to a few people who had transitioned from coding to technical writing. They all took different paths, but they seemed happier, and that gave me the push to pursue it.

Klint: How do you plan for the next stages of your career? Do you have a five-year plan, or just play it by ear?

Diana: A year ago I couldn’t have imagined that I would be a technical writer today. I don’t have a plan for the next five years, but I would like to continue as a technical writer.

Nick: I don’t have a specific timeline, but I have some big-picture goals. I would love to build something that can sustain itself without me, whether that’s by being acquired or by me passing the torch to another maintainer. Besides that, I’d like to be able to focus on solving problems I care about. I feel like that is everyone’s dream, to just work on the things you care about and not have to consider finances, but that’s where I’d like to be.

Justin: I did have a very specific timeline for becoming a senior engineer, but now I try not to get caught up in that sort of thinking. Giving yourself arbitrary deadlines is a good way to set yourself up for disappointment and it can stop you from thinking bigger. I don’t want to be tethered to a timeline. I want to see what life can be like if I strive to be the best human possible, through acts of kindness and by honing my craft. I find that by doing things I’m passionate about, like organizing Render ATL, my goals become more clear.

Klint: What role have mentors played in your career development?

Diana: It was essential for me to have a mentor. He gave me the push I needed. My mentor encouraged me to broaden my technical skills and deepen my knowledge, and that gave me more options in the field. Without him, I don’t know that I would have gotten the technical writing job. The interview wasn’t as hardcore as an interview for a software development role, but it’s still a technical interview and you need to demonstrate your knowledge.

Justin: I’ve had a lot of unofficial mentors—people I looked up to and who gave me good advice. I like to talk, and I’ve been told that people who talk a lot need to listen. If you are intentional about listening, then you can use the gift of gab to export what you’re learning from other people. So I don’t want to ever limit myself by having only one mentor. I learn from people of differing backgrounds, different genders, and different socioeconomic statuses. I think about where I would like to get to in my career and look for people who are already there. Then I follow them on social media and see if they have answers to my questions or know where to find the answers.

Nick: I’ve suffered from a lack of mentors in my career. It would have been constructive to have had someone in my corner when I was starting out. But like Justin, I talk to people I respect who have had successful careers and ask them questions. I interviewed at Netflix a few years ago, and even though I didn’t get the job, I still talk with the hiring manager. I wrote a blog post called “Don’t seek mentors, seek friends.” It’s much easier to ask people for help if they’re your friend.

Klint: One of the things I often hear from people thinking about leaving tech is that they want a job where they don’t have to look at a screen all day. It’s definitely something I struggle with myself. How do you manage your screen time?

Diana: I don’t manage it. Laughs. I’m not an outdoorsy person, and I like playing video games in my free time. I did get worried about how much time I was spending looking at screens when I worked as a software engineer because I spent a lot of time outside of work learning new skills. Combined with my leisure time playing video games, it really added up. Now that I’m a technical writer, I’m more likely to stop working at 5 o’clock, instead of staying up late learning React or something. And being remote, it’s easy to take a 15-minute break to walk around and stretch. There’s no pressure to sit around and look busy all the time.

Justin: I try to keep my average screen time on my iPhone under six hours daily, but I don’t worry too much about it. I think a lot of the worries about screen time are overblown. Our devices are how we connect with people. Since I own my own company, it is important that I am available to answer questions. That’s the reality of the highly connected world we’re in.

Nick: I am one with the screen. Laughs. I’ve been gaming since I was four years old, so I like to imagine I have more resilience than most people as far as that stuff goes. But there are periods when I don’t do any software development for months at a time. I’ll focus on writing for my blog instead of writing code. If I start feeling burned out, I do minimize the screen time. No gaming, no coding. I’ll do things like go on hikes and eat with my family.

Klint: How do you know when it’s time to take a break, Nick?

Nick: I think one clear warning sign that people ignore is when you start thinking about what you’d rather be doing instead of what you’re supposed to do. When my heart rate increases thinking about the things that I don’t want to do, that’s a red flag. When I feel overwhelmed I start taking things off my plate. I’ll prioritize things I absolutely have to do and deprioritize the things that are more or less optional.

Klint: If you were to leave tech, what field do you think you would pursue?

Nick: I think I’d get into video production. I was a stand-in producer on a project once, and I think if I’d discovered production before software engineering I might have gone that route instead. It’s overwhelming and stressful, and there are so many things producers have to deal with that you can’t predict. But it’s also fun. There are many moving parts: the set, the camera crew, and on and on. You’re like a pilot trying to land them all safely. There’s a real sense of accomplishment when you finish a project.

Diana: I think I would try to do a 180 and do something to leverage my experience in both nursing and tech, but going back to nursing would be a last resort.

Justin: If I got out of tech, first I’d go on a trip around the world. Then I’d go live at the McMurdo research station in Antarctica and find some cool projects to work on there, to get myself closer to my childhood dream of living on every continent. Then I would focus on finding ways to give back. I think the highest level you can reach in life is to spend your time empowering people to reach their goals and be the best they can be.


The ReadME Project is a GitHub platform dedicated to highlighting the best from the open source software community—the people and tech behind projects you use every day. Sign up for a monthly newsletter to receive new stories, best practices and opinions developed for The ReadME Project, as well as great listens and reads from around the community.

Keeping your skillset fresh as a developer https://github.blog/developer-skills/career-growth/keeping-your-skillset-fresh-as-a-developer/ Thu, 25 Aug 2022 17:00:22 +0000 https://github.blog/?p=66604 Whether you’re committing 30 minutes or 3 hours a day to learning, consistency is key. Klint Finley asks 3 tech professionals at different stages in their career for more advice.

The post Keeping your skillset fresh as a developer appeared first on The GitHub Blog.

If there’s one thing everyone agrees on in the fast-changing world of software, it’s that developers and other tech professionals need to keep learning in order to keep their skills fresh and relevant. On the one hand, that’s easier than ever with the wealth of resources available, from books to online classes to YouTube tutorials. On the other hand, there are more programming languages, database systems, cloud platforms, and other technologies than anyone could ever learn. So how do you decide what to spend your precious time learning? And how much time should you commit to learning new technologies?

To answer your questions about keeping your skillset fresh, The ReadME Project’s senior editor Klint Finley gathered a panel of three tech professionals at different stages of their careers. Let’s meet our panelists:

Headshot photograph of Karhik Iyer Karthik Iyer is a software engineer at JPMorgan Chase. He graduated from IIT Roorkee in June of 2021. He’s been a Google Summer of Code candidate twice and loves anything related to software used in films and likes to read about computer graphics.

Headshot photograph of Monica PowellMonica Powell is a senior software engineer at Newsela, a GitHub Star, and founder of React Robins. She’s passionate about making open source more approachable, elevating people through technology, and cultivating communities.

Dan Kuebrich is VP of platform engineering at FullStory. He previously spent a decade working on distributed tracing and observability as co-founder of Tracelytics. He likes games, making things, and biking around town with his family.

Klint: How do you decide what to invest your time into learning?

Monica: I’m guided by what I’m working on at the moment and what I’m trying to accomplish. In my first job, I was doing a lot of work with React, but I didn’t have much experience in it. I had to get familiar quickly, and I spent a lot of my own time rebuilding my personal website in React, going beyond what was required for me at work, and deepening my understanding of the library and its ecosystem. If there’s a job I’m interested in, I’ll research what types of technologies I need for that role.

When I’m not heads-down learning things for work, I have the bandwidth to be more exploratory and enjoy using my side projects to investigate cool new technologies that I want to examine that might not be immediately useful at work. Right now, for example, I’m looking into different 3D libraries. The concepts between different frameworks or cloud platforms are usually pretty transferable, so I focus on learning particular areas rather than get bogged down worrying about which specific tool to learn.

Karthik: I’m early in my career, so I feel like I still have quite a bit to learn. I tend to think big picture about areas of expertise, like networking or machine learning, rather than specific technologies. The networking concepts from Amazon Web Services will be broadly applicable to Microsoft Azure, as Monica said, so I try not to overthink which specific tools to learn and think more about the direction I’d like to take my career. Besides that, I follow a good set of more experienced people on Twitter and pay attention to what they’re learning and what tools they’re using. When I hear about something interesting, I Google it and maybe watch a few YouTube videos and then decide whether it’s something I want to dive into. You never know what skills are going to be useful in the future, and focusing on what interests me keeps me engaged.

Dan: As Karthik said, there’s a set of broadly applicable principles and skills. I’d also argue that the way to really learn a technology is to build something with it that someone can actually use and then maintain it over time. I try to focus on technologies I actually need to use today, to solve a real problem. When you’re learning something new for the sake of learning, make sure it’s actually something new: One MVC framework will likely be similar to the next one, just as relational databases tend to be similar to each other, and so on. But a functional programming language might be fairly different from an object-oriented one, and a statically typed language will be different from a dynamically typed one. You’ll learn more deeply by picking something truly different than what you’re used to.

Klint: Conversely, how do you decide NOT to learn something? What makes you scratch something off the list or move it to the “not right now” category?

Dan: I usually only reach for a new tool if there’s an overwhelmingly compelling reason to use it instead of what we’re already using. So if there’s no clear reason that the new thing is preferable, I say skip it and make your existing tools do the job. I’m a big advocate for having a small set of tools that you know well.

Monica: I agree. If something doesn’t solve the sorts of problems I’m working on, I’m not going to spend time learning it. It’s fun to learn for the sake of learning, but if I try to learn too many things at once I don’t get the depth I want. To narrow things down, I think about how the things I want to learn complement what I’m working on and my goals. For example, I recently decided not to spend much time on emerging serverless databases. I’ve done some exploration of different options, but they don’t really fit into the work I’m doing. For my personal projects, I’m able to leverage serverless functions instead of a database. It’s not that I don’t want to learn more about serverless databases eventually, but it’s just not something I need immediately.

Karthik: I try not to write things off without learning a little about them first. I know that machine learning is a field that is advancing rapidly. Still, when I started with the beginner resources in ML, I felt that, because of the mathematics involved, it wasn’t for me, so I ended up focusing on the data visualization side. But I gave ML a fair try first, and it’s something I might come back to. Just because others are doing it doesn’t mean that you have to do it if it doesn’t grab your interest. Although I try not to be dismissive, it doesn’t necessarily make a lot of sense to spend time on legacy technologies if you’re not actively using them at work. It’s worth paying attention to what sorts of platforms are being phased out.

Klint: What skills have you learned that ended up being the most transferable? What skills make learning other skills or technologies easier?

Karthik: The basics of computer science, like data structures and algorithms. I worked with C++ throughout university and now primarily work with Java, but the underlying principles were transferable. The design patterns and architectural principles of a Python application can be applied in Java. The principles of functional programming are applicable in Rust, even if you learned them through Haskell.

Dan: That’s exactly right. I suppose I can go a layer deeper, below computer science. When I was a kid, if we had a question at the dinner table, my dad would get the dictionary or an encyclopedia and answer the question. Of course, now you can just Google the answer to just about any question, but what that showed me was that with a little initiative, you can find the answers to your questions yourself. That principle helped me in college when I was studying operating systems. I had to read code and documentation to find the answers to my questions. And it serves me well to this day. If I have a question about some software, I start grepping. I find many of the questions I might have are answered in git logs or Slack logs somewhere. Sometimes it’s best just to ask someone who knows, but getting good at finding answers for yourself pays dividends: You’ll get faster at it and gain more intuition about when to search for an answer and when to ask for help.
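Dan’s habit of “grepping” a project for answers can be sketched with a few standard Git commands. This is a hypothetical, self-contained demo (the repo, file name, and commit message are invented for illustration) showing three common ways to dig through history:

```shell
set -e

# Build a throwaway repo so the searches below have something to find.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
echo "def upload(retries=3): pass" > uploader.py
git add uploader.py
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "add retry logic to uploader"

# Search commit messages for a keyword:
git log --oneline --grep="retry"

# Search tracked file contents for a symbol:
git grep -n "retries"

# Find commits that introduced or removed a string (the "pickaxe" search):
git log --oneline -S "retries"
```

The pickaxe search (`-S`) is often the fastest way to answer “when and why did this code appear?”, since it points you straight at the commit, and from there at the commit message or linked discussion.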

Monica: For me, it would be accessibility. It’s applicable to any website. Learning to audit a website or product for accessibility issues is valuable regardless of the language or framework used. Relatedly, understanding web performance, which can be viewed as a subset of accessibility, is extremely useful. I want sites to load quickly, even on slow connections, so that users can access a site’s core functionality regardless of their internet speed. Both accessibility and performance are beneficial from job to job. The most important transferable skill, however, is learning how you learn most effectively. Once you better understand your learning style, you’ll learn more quickly and have more fun doing it. And as far as specific tools are concerned, learning Git has been extremely valuable and portable across roles.

Klint: How much time do you commit to learning? Where do you fit it into your schedule? Do you commit work hours “on the clock” to training or do you mostly learn during your free time?

Karthik: I don’t really have a set formula, but I definitely try to spend some time at work, say half an hour to read books or watch tutorials. But every day is different. If I’m doing something for myself, say for an open source project I’m working on, I’ll do that in my personal time after work. Apart from when my work schedule is very heavy, sparing half an hour a day to upskill is pretty doable.

Dan: My company has a program in engineering called “sharpen the saw time,” where folks are able to take advantage of up to four hours per week for learning, whether that’s reading a book or taking a course or something else. But the best way to learn is to have a job that pushes your boundaries. Then there’s little difference between time spent “on the job” and time spent doing the job because you’re almost always learning. In part, that’s down to finding the right job. The other part is about applying yourself. Sometimes that means thinking beyond your immediate responsibilities, as Monica did with React, and trying to understand additional layers of your organization’s tech stack. Maybe it’s looking at the little nuisances in your workflow to find ways to make it better. There are so many things we interact with in our day-to-day work that we could better understand. Applying a little curiosity to your current role will help you grow constantly, not just when you’re sitting down with a book or a tutorial.

Monica: Yes, I’ve done a lot of on-the-job learning, especially when starting out in a new role. How much time I commit to learning each day really varies. Sometimes I go into learning overdrive, like I did during my first software development job. Every day I learned something new at work. Outside of work, I build projects with the same sorts of tools I use during the work day so that I can understand how different parts of a web application fit together, even the parts I don’t contribute to at work. I build better mental models and better understand where my work fits into the bigger picture.

I like to have small, achievable goals. It’s easy to feel like you’re not making any progress when you’re only spending around 30 minutes at a time. I recommend the book Atomic Habits by James Clear. Last year I learned more about art by drawing something every day. It was hard to see the progress at first, but spending half an hour per day really adds up over time. It’s the same way with technical skills. The important thing is consistency.


The ReadME Project is a GitHub platform dedicated to highlighting the best from the open source software community—the people and tech behind projects you use every day. Sign up for a monthly newsletter to receive new stories, best practices and opinions developed for The ReadME Project, as well as great listens and reads from around the community.

The post Keeping your skillset fresh as a developer appeared first on The GitHub Blog.

]]>
66604
Marketing for maintainers: Promote your project to users and contributors https://github.blog/open-source/maintainers/marketing-for-maintainers-how-to-promote-your-project-to-both-users-and-contributors/ Thu, 28 Jul 2022 17:00:14 +0000 https://github.blog/?p=66247 Marketing your open source project can be intimidating, but three experts share their insider tips and tricks for how to get your hard work on the right people’s radars.

The post Marketing for maintainers: Promote your project to users and contributors appeared first on The GitHub Blog.

]]>
Congratulations: You’ve pushed your code to GitHub. But if you want other people to benefit from your hard work—or contribute back to the project—you need to get the word out. Not only that, but you need to explain why other people should take the time to learn and potentially contribute to your project.

It can be intimidating to go from writing code to writing marketing copy, so The ReadME Project’s senior editor Klint Finley gathered three experts to answer your questions about how to promote your project to both users and contributors. Let’s meet our panelists:

Segun Adebayo is a GitHub Star and software engineer at Vercel. He’s a design engineer and the creator of Chakra UI, an accessibility-focused React component library, and a collection of framework-agnostic UI component patterns called Zag.

Tasha Drew leads VMware’s xLabs, an innovation acceleration lab. She’s also co-chair for Kubernetes’ Working Group for Multi-tenancy, and the co-chair for Kubernetes’ special interest group (SIG) Usability. She likes to play Hearthstone, read novels, surf Twitter, and is a fan of puppies, kiddos, beaches, and picnics.

Aaron Francis is a marketing engineer at Tuple. He’s also an active contributor to the Laravel, Vue.js, and Tailwind ecosystems. He created the serverless Laravel deployment tool Sidecar and Airdrop, which aims to speed up deployment by skipping asset compilation whenever possible.

Klint: How did you spread the word about your project in the early days? Any tips on making something stand out?

Segun: After launching Chakra, I spent two or three months just talking about it and tweeting about it. I shared screenshots to show people what it could do for them. I created 10-20 second demos to show people how easy it is to use, how quickly you can go from zero to launch. That created curiosity and made people want to try it for themselves.

I think when you’re first starting out it’s really important to think about the problem your project seeks to solve. You need to let people know what tangible benefit your project provides, whether that’s helping them work faster or saving them money or something else. When you solve a meaningful problem, it makes an emotional impact. I talk about the idea of the “minimum viable audience,” which is generally just a few friends who can provide honest feedback on something either before it launches or soon after. They can let you know if you’re really solving a meaningful problem and how to make your project more helpful. Also, you should think about how you define your success. I’d say think less about npm download metrics or GitHub stars and think about the impact it makes over time.

Tasha: When you’re preparing to launch your open source project, it’s helpful to think about it through the lens of the people who you expect to adopt it. As Segun said, make sure you are solving a problem for them. If you expect enterprises to adopt it, think about what business problem you’re addressing. Knowing where you fit in the ecosystem is important.

One thing to consider is whether you’re a leader or a challenger. Kubernetes was backed by Google, but was still a challenger. They had to convince people to use Kubernetes instead of one of the existing solutions like Docker Swarm or Mesosphere. Now Kubernetes is a leader and Nomad is a challenger.

If you’re a challenger, you can probably afford to be more open, both in terms of accepting feedback and in terms of how flexible and extensible your software is. Or you can think about a niche problem, and solve it significantly better than the leader. What you want is for people to have a reason to try you.

Aaron: Solving a niche problem worked well for me when I launched Sidecar. It’s a general-purpose tool and it can be hard to get people to use those. To get the word out in the early days, I picked a very specific use case that Sidecar could solve really well. Basically, I provided an easy way to use Inertia.js to do server-side rendering with Laravel’s Vapor service. That got people in the Laravel and Inertia.js communities talking about Sidecar. Addressing the problems of existing communities is a powerful way to get noticed. You have to advocate for yourself, you have to reach out to podcasts, meetups, and conferences. You have to tweet at people and let them know that you have a solution to a problem they’ve mentioned having. You shouldn’t feel icky about it. You put a lot of time into making something helpful.

Besides that, documentation is crucial. I will say if you want people to use your project, you need to make it as easy to adopt as possible, and the best way to do that is through documentation. Write as much of it as you can stand to write. It lowers the barrier to entry for your project and can make the code itself better. For example, if you find it’s hard to document a particular feature, that’s probably a sign that it’s too complicated and you need to simplify it.

Segun: And don’t stop with written documentation. Video is really helpful for a lot of people. People learn in different ways so it’s important to provide different types of content.

Klint: What are some common mistakes people make when trying to communicate the value of their open source project?

Tasha: Sometimes you’re so excited about what you built that you over-fixate on how clever you were. For example, I’ve seen project landing pages with links to distributed systems papers and information on how they implemented a particular protocol. That sort of information is only interesting to other distributed system builders, not to users. You need to think about what problem you’re solving, not give a dissertation on the technology you used. What’s the message you want people to take away from your webpage or your README? It’s probably not related to the theory behind the code.

Segun: Yes, one of the biggest mistakes I see is the use of too much technical terminology on a page, focusing too much on the features rather than the tangible benefits it provides to users. It’s best to talk about the “why” or motivation behind the library, then show your users what benefits they stand to gain by using your library. And doing all that in as simple a language as possible.

Klint: But if what you’re working on is a tool for developers, or if you’re trying to attract contributors, does it make sense to focus more on the underlying technologies used?

Segun: That helps, but it’s important to note that people who want to contribute to your project have varying levels of skills and expertise in the technologies you’ve used. Some might be interested in contributing to documentation, others might want to help with triaging, fixing bugs, etc. That said, ensuring your project is easy to contribute to by providing clear contribution guidelines, creating walk-through videos, and reducing the “setup” processes goes a long way. Documenting the roadmap, currently known bugs, and even asking for help in certain areas will help attract contributors.

Tasha: The other thing to remember is that you probably want to attract non-technical contributors, and contributors of different technical skill levels. To go back to my earlier example: If you have a sophisticated distributed systems project, you shouldn’t expect to get a lot of people with deep distributed systems experience to contribute to your project right away. That’s probably going to be a big commitment of time and effort and in the lifespan of your project, you’ll probably only ever get a few people who will contribute to those parts. Most of your contributors will be pitching in other places, where the barrier to entry is lower. You want to make your language and project easy to understand so that people of various technical skill levels will be interested.

Aaron: I think as developers, we become attached to the things we build. That’s not a bad thing. But you shouldn’t become so invested in the thing you’ve built that you insulate yourself from feedback. It’s easy to reflexively dismiss criticism and think “that person just doesn’t get it” or “they’re just not smart enough.” There are trolls out there who will just say negative things to get a rise out of you, but there are also a lot of well-meaning people who just want to understand what you’re building or learn how to use it.

You have to advocate for yourself, you have to reach out to podcasts, meetups, and conferences. You have to tweet at people and let them know that you have a solution to a problem they’ve mentioned having. You shouldn’t feel icky about it. You put a lot of time into making something helpful.

- Aaron Francis, maintainer of Sidecar and Airdrop

Klint: What can maintainers do to attract more contributors in the early days of their project, when it’s too early to say “Hey, this is a big project, you can put it on your resume”?

Tasha: You will get more contributors if you get organic user growth. If people see it as something useful, if they see other people using it, they’ll want to contribute. The more valuable your project is, the more people will want to get involved, whether that’s because they love the project or because they see contributing to it as a way to advance their careers.

Also, you want to make it technically easy to contribute. You can lower the barrier to entry by designing your software for extensibility. Take Chef for example. People can contribute Cookbooks to the Chef Supermarket without having to understand the inner workings of the core software. Your software will be more useful if you let people build their own extensions without having to get a pull request approved, and the community will grow faster.

Aaron: The big thing is to be responsive, especially to the first few people who are taking a risk by investing time in your unproven project. Time is finite, we only get one life, so value those people who are willing to spend some of their precious resource on you. That applies not just to people sending pull requests, but to people pointing out problems or making suggestions on social media as well. Thank people for pointing out problems, thank people for making suggestions even if you don’t plan to implement them right away, and acknowledge parts of the software that might be painful right now. The situation is super different when you’re the size of, say, Tailwind, and you’re getting hundreds of issues per day. But if you’re starting out and looking to get traction, you need to be kind, responsive, and helpful. Kindness is key. You can’t always be responsive or helpful but you can always be kind.

Segun: I agree—you have to be responsive, kind, and helpful. Knowing that people are willing to spend their time improving your project and pushing your vision forward is a good perspective to have. Don’t be rude or mean to people regardless of their comments or feedback. You also need to get comfortable interacting with strangers, whether it’s via text or video chat. Be ready to jump on a Zoom call with someone you’ve never met before to help walk them through an issue. It might be intimidating at first to interact with people you don’t know, but you have to do it if you want to grow. This is a sure way to meet new people and make new friends that might be helpful to you in the future.

Having some sort of contribution guidelines with all the information new contributors need to make their first contribution is critical. Making that first pull request should be as frictionless as possible because they’re more likely to stay if things go smoothly the first time.

Klint: Do you feel like you’ve developed a “personal brand” or personal communication style? If so, how have you cultivated it?

Aaron: I think I have a personal communication style and you could also call it a brand. I try to be kind, empathetic, sensitive, and confident. That’s how I try to show up online. When someone interacts with me, I try to be kind, even if I don’t agree with them. I think in engineering circles being kind has been sacrificed at the altar of being correct, and I think it’s a huge false dichotomy. That’s particularly important if you’re trying to get enterprise adoption or sponsorship for your project. Large organizations are not going to want to interact with people who have bad public communication hygiene or bad public personas.

Segun: I think of personal branding as what people expect of you when you step into a room. What do you want to be known for? People build a mental model in their mind of what kind of person you are based on what they see: the talks you give, how you behave on social media, and how you treat others online. I’ve picked an area of expertise—design systems, state machines, and accessibility—and I do my best to share helpful tools and ideas I learn freely with others. I think this has earned me a reputation that I intend to uphold over time. The biggest thing I try to do is to be open and vulnerable. I admit what I don’t know. I try to learn from other people’s perspectives. I think really listening to people, one-on-one if possible, helps create a positive impression.

Tasha: I don’t think I have a personal brand. I don’t think I’ve been single-minded enough about curating my Twitter content, for example. But my communication style is upfront, consistent, and transparent. When I talk to you, I will give honest feedback. The way I developed it, or at least became aware of it, is that I used to get the feeling that I needed to be less direct or up front with people. I mentioned that to an executive coach who said that rather than try to change, I should lead with that style and say: “Our value is clear, transparent communication and fast feedback.” I really responded to that. People respond well to up-front feedback when they’re expecting it, when you’re clear from the beginning that that’s what you’re going to give them, and when you’re open and comfortable with receiving it yourself.

Klint: What role does speaking at events play in promoting projects?

Aaron: I can only speak to my own experience, but I’ve found speaking at events to be pretty important. Putting yourself out there publicly is scary, but it’s how people will find out about your work. The first talk I did for Sidecar was at a meetup. There were between 80 and 100 people watching live and I thought, “This is my big break! There are 80 people watching, I’ve made it.” It didn’t make Sidecar an overnight success, but four or five more people started using it—four or five more people who could potentially spread the word to other people. Each event you do, or each podcast you do, makes it more likely you’ll be invited to another event or interview, so you can start building momentum that way.

Tasha: Let’s look at Docker as an example. People think of it as an overnight success, but it wasn’t. The company behind Docker sent evangelists to what felt like every tech meetup, worldwide, for at least two years, to tell people about Docker and why it was interesting. People underestimate the power of focused evangelism. I think speaking with people is very human, and we tend to be more willing to try technologies if we meet someone. That said, you need to think about the actual time investment for any given event and whether your talk is a good fit for the audience.

Segun: It is important, but you need to find a healthy balance between maintaining and evangelizing the project at events. Promoting at events increases your opportunities and your project’s visibility in the community. It’s one of the most effective ways to get developers interested or excited about your project. The balance to this is the timeline between shipping your project and promoting it at an event. In my mind, after you’ve shipped a release is generally a good time to do talks and podcast interviews. But if you’re in the middle of a release, maybe you should focus on that. It’s probably best to talk about your project when it’s in a stable place, not when it’s changing rapidly. I think it’s important to have a team that can help give talks as well, especially if you don’t enjoy giving talks. If you can build a team of collaborators, there will be people who are naturally excited about evangelizing the project in conferences, podcasts, and speaking publicly about it. You have to take breaks. Having a team helps give you the space to do that.

Do you have a burning question about Git, GitHub, or open source software development?

Drop your question to us on your social channel of choice using #askRMP, and it may be answered by a panel of experts in an upcoming newsletter or a future episode of The ReadME Podcast!


The ReadME Project is a GitHub platform dedicated to highlighting the best from the open source software community—the people and tech behind projects you use every day. Sign up for a monthly newsletter to receive new stories, best practices and opinions developed for The ReadME Project, as well as great listens and reads from around the community.

The post Marketing for maintainers: Promote your project to users and contributors appeared first on The GitHub Blog.

]]>
66247
What to do when your open source project becomes a community? https://github.blog/open-source/maintainers/what-to-do-when-your-open-source-project-becomes-a-community/ Thu, 30 Jun 2022 15:11:39 +0000 https://github.blog/?p=65939 Maintainers answer your questions about how to manage an open source project that grows into a community.

The post What to do when your open source project becomes a community? appeared first on The GitHub Blog.

]]>
Many an open source project is created to scratch an individual developer’s itch. But when other people contribute to—and depend on—a project, it stops being just about the original creator or creators’ own needs. As some projects grow, so do interest and the volume of opinions. They become communities. When this happens, maintainers often find themselves having to draw on or grow new skills, switching suddenly from simply coding to tasks like traffic control, governance, and project management. This means a maintainer must take an expanded approach to their work.

To answer your questions about how to adapt from maintaining projects to maintaining communities, The ReadME Project’s senior editor Klint Finley gathered a group of open source developers who have done just that. Let’s meet our panelists:

Chrissy LeMaire is a PowerShell developer, author, SQL Server expert, and the creator and maintainer of dbatools, the popular PowerShell module for SQL Server professionals. Originally from Louisiana, she currently works and lives in Europe.

Fred Schott is the co-creator of Astro, a static site builder that helps developers build faster websites by shipping less JavaScript. He now lives in Oakland, often writes about his journey through the tech world, and is currently trying to adopt a dog.

Jem Gillam co-maintains Graphile, a suite of powerful open source tools to help developers build the backend of web and mobile applications. Graphile’s flagship project is PostGraphile, a tool for creating well-structured and standards-compliant GraphQL APIs.

Klint: When you started your projects, did you have the community in mind from the beginning? Or were you trying to build something useful, with the community aspect coming later?

Chrissy: I didn’t start dbatools with the idea of building a community. I’d started communities before and I’d started open source projects before, but I hadn’t thought to mix them. But I did know that I wanted a team. I’m not an expert at SQL Server, so I wanted others involved to make it a fun and useful tool. The community emerged organically.

Fred: When we started Astro we were coming at it to scratch our own itch. But we’d seen communities grow, so from the get-go, we knew the importance of community. It was one of the first things we focused on once we had a minimum viable product (MVP) together. We started a Discord, started getting documentation and policies in order, and it took off from there.

Jem: My partner Benjie Gillam and I came to PostGraphile after it had already been around for a while and ended up becoming the maintainers after the previous maintainer changed jobs and moved on to other things. We’d had quite a lot of community experience before this, for example, founding a maker space in Southampton, but we didn’t realize we were building a community when we got involved in PostGraphile.

Klint: When did you realize you had a community and not just users?

Jem: I think it was when we had people reaching out wanting to sponsor the project. We started out using Gitter for chat, but we outgrew it and moved to Discord so we could add more structure to our discussions. We started doing more intentional community management, but it really felt like a community when we realized that people were serious enough about Graphile to want to put money into it to ensure its continued existence.

Especially when you’re first starting out, you don’t know who’s using your software. You only hear from people when they have problems, when they’re filing issues or looking for support. So it was really validating to see how much people really cared about what we were doing.

Chrissy: dbatools generated a lot of excitement early on because it simplified incredibly tedious tasks. Within a month of starting the Slack channel, I knew we had a community. But it wasn’t until six months later that I knew we’d built a community that would last for years. At the six-month mark, we all got together and used a kanban board to standardize our coding techniques. It brought so many of us together and resulted in a higher-quality project we could all really feel proud about.

Fred: I don’t know what point it was for us. Just seeing the first couple of people find their way to our Discord was a huge moment. It was super early days, but they were excited. They saw the vision we were building towards. There was an immediate feeling that we’d found our people, so to speak: people who cared as much about what we were doing as we did.

Klint: Beyond organic growth, can you share any proactive steps you’ve taken to grow your community and attract more participants?

Fred: We weren’t very proactive at first. We didn’t want to grow too fast. We already had an audience from projects we’d done before, so we were lucky enough not to have to go out of our way to promote Astro. But, one thing we’ve done recently is that we’ve changed our expectations for core maintainers. 

Our documentation used to assume that core contributors were technical contributors, so we had some technical requirements. But people contribute in lots of different ways, from documentation to community support, so we’ve updated our expectations accordingly. 

Jem: We started off using f5bot to keep an eye on our name cropping up on different social media channels, which let us meet our users where they were already discussing us. Once we had a Discord server we started pointing people to that. The outreach flows rather naturally from the way we do things. Rather than writing out a strategy, it’s about how we want to interact with the developer community and doing things the way we’d want to see them done.

Chrissy: Yes, exactly. I thought about the things I personally value in a community. I go to conferences and let people know that we’re looking for contributors and that we’ll walk anyone through submitting their first pull request. I entice people by letting them know they can safely cut their teeth with us, then go on to contribute to things like Microsoft documentation. I talk about the benefits of open source and adding the skill set that comes with it to their resume. In the early days, I even set up a company on LinkedIn and let people know they were welcome to add themselves if they had contributed something major. I liked the LinkedIn setup because I could also recommend major contributors who were looking for jobs, and I’ve been told it helped a couple of people find their next opportunity.

Klint: Supporting the needs of a group of people is always going to have its challenges. How do you make your community welcoming in terms of skill level, diversity and inclusion, and all-around friendliness?

Chrissy: When I started out in open source, it wasn’t always a nice place. Exclusion was pretty much the standard, but I always thought it was cooler to be welcoming to others. So inclusivity was a goal from the very beginning. As a gay woman in tech, inclusion has always been of particular importance to me. As our community has grown, we’ve emphasized kindness and inclusion while keeping bad behavior firmly at bay.

I also wanted people of all skill levels to feel welcomed within our project. dbatools is a command-line tool and I do recognize that the command line can be intimidating. So I try to address that by hopping on Twitch and showing people that I also use GUI tools, and that even as a Microsoft Most Valuable Professional and GitHub Star, I still struggle or make mistakes. I show people that it’s OK not to know everything and that they don’t have to use the most intimidating or complicated tools to be effective.

Jem: We’ve had a code of conduct since the beginning. But really our main approach is leading by example. We try to make everyone welcome regardless of who they are and their experience. We found what works is to be non-judgmental. It’s very easy for people to think they have a silly question, but we encourage people to ask them anyway because if one person has a particular question other people probably do as well. So we try to answer in a kind way or point to any documentation they might need. We’re getting more questions about GraphQL in general, rather than our project, so we’ve become quite adept at correcting misconceptions and pointing people in the right direction. Because we set the example, our users tend to act that way too; they understand the tone of our community.

Fred: Some things are table stakes now. You’d be surprised at how early contributor guidelines come in handy. You need to answer questions like “How do I clone the repo?” or “How do I get the packages installed?” Maintainers have been in their codebases for so long that they forget how hard it can be for new people to get started. A code of conduct is a must have. You’d be surprised how often it helps if you run into a sticky situation or a gray area.

Klint: I’ve seen maintainers say their projects are too small for a Code of Conduct. Is there a point where it’s too early?

Fred: Not really. If you care about outside contributions at all, you should have a Code of Conduct, regardless of size. It’s more about intent: If your goal is to build a community or encourage outside contributors on your project, then you’ll want a Code of Conduct. If your repo is meant to be a solo project, then I think you’re fine to skip it.

Klint: If you could give just one piece of advice for maintaining a healthy and welcoming community, what would it be?

Chrissy: Diversity and inclusion is not about what you say, it’s about what you do. It’s easy to say “everyone is welcome here,” but it’s hard to actually make people feel welcome. It’s hard to enforce boundaries. It’s hard to call out bad behavior without allowing it to become a pile-on. You have to take responsibility for doing what’s right, even when it’s hard.

Fred: I think the main thing I would say is to be accommodating to everyone. You don’t want to have too many bars for someone to clear in order to get involved, especially in the early days. If they hit a snag, help accommodate them. If you’re dismissive of the problems people face, they’ll leave. First impressions are important.

Jem: I think what has worked for us is that our own replies set the tone of the engagement. If something really negative comes up, we tend to give it time rather than react in the moment. Sometimes someone might not speak English well, so their tone doesn’t come across correctly and they sound less kind than they intend. We strive to assume good intent, but from time to time we have to message someone to ask them to be kinder—or to use a more appropriate Discord handle. Leading by example has so far given our community a friendly, non-judgmental atmosphere.


The ReadME Project is a GitHub platform dedicated to highlighting the best from the open source software community—the people and tech behind projects you use every day. Sign up for a monthly newsletter to receive new stories, best practices and opinions developed for The ReadME Project, as well as great listens and reads from around the community.

The post What to do when your open source project becomes a community? appeared first on The GitHub Blog.

Making technical interviews better for everyone https://github.blog/developer-skills/career-growth/making-technical-interviews-better-for-everyone/ Fri, 03 Jun 2022 09:00:13 +0000 https://github.blog/?p=65425 How to interview for skill, not spare time.

The post Making technical interviews better for everyone appeared first on The GitHub Blog.

Technical interviews are tough for everyone on both sides of the table. Interviewers need to evaluate whether someone has both the technical chops and social skills to fill a particular role on their team. That need often results in mentally taxing puzzles for candidates to work through on a white board, lengthy take-home exams, and a grueling number of interviews with multiple stakeholders. All too often this leaves candidates exhausted before they even start a job—even as hiring managers remain unsure who is, or isn’t, a good fit. 

There must be a better way. I gathered a panel of experts from the developer community with experience both as job seekers and hiring managers to answer your questions about how to make technical interviews work better for everyone. But first, let’s meet our panel of experts:

Dana Lawson is senior vice president of engineering at Netlify. Previously, she led and scaled teams at GitHub, Heptio, InVision, and New Relic. She has more than 20 years of experience leading teams and wearing various hats to complement a product’s lifecycle. With a true passion for people, she brings this mix of technology and fun to her leadership style.

Kathy Korevec is the vice president of product at Vercel. Kathy has specialized in developer experience for over a decade at companies like Google, Heroku, and most recently on the product leadership team at GitHub. She has a passion for the power of open source, perfecting usability of developer tools, and making it possible for anyone to build and ship world-class software.

Ian Douglas is a senior developer advocate at Postman, and live streams free career prep on Twitch. He authored content for techinterview.guide, where he shares 26 years of experience as a contributor and manager in tech. He loves educating and building communities, running workshops, and helping raise awareness of the importance of diversity in tech.

Klint: Let’s start with the elephant in the interview room: the whiteboarding exercise. Are there ways to do this well, or is it time to retire whiteboarding entirely?

Ian: I don’t know that whiteboarding will ever go away. It can be a useful tool, not as much for writing out code by hand, but for things like sketching out systems designs. I stopped having people write syntactically correct code long ago, but it can be a handy way for people to explain their thinking on a problem.

Dana: The important thing isn’t whether someone is writing on a whiteboard or typing on a screen or even just talking through a problem. The important thing is what you learn from it. My team tries to make sure the guiding principle is looking for traits, like deep thinking and problem solving. As long as the guiding principles are solid, it doesn’t matter if you give someone a whiteboard exercise or a take-home exam.

Kathy: It’s really important to give people a real problem to solve, regardless of the medium of the exercise. For years, it was trendy to give people complex algorithmic challenges to prove their worth. I’ve always found these arbitrary. They aren’t related to the actual job. When you give people a real problem, you can get a better sense for how their brain works.

Ian: I entirely agree. Technical exercises, whether they’re on a whiteboard or not, should be relevant to the role. I did an interview with a company building a content management system. The technical exercises they gave me involved using APIs to surface data on things like who went to particular pages, how long they spent, or how many times they visited the page. I thought this was an amazing interview, because it was directly related to the work I would have been doing on their analytics team. These sorts of company and team-specific exercises will help candidates demonstrate whether they have skills that are immediately valuable to your team. Otherwise, you’re just testing how well a candidate can regurgitate algorithms.

Klint: What are some of the best ways to evaluate technical skills other than whiteboarding?

Dana: There are a lot of different ways. I think what’s important is to let candidates choose between a few options. Some people think all candidates for a position should be evaluated with the exact same method for the sake of fairness. But it’s more fair to give people the opportunity to do their best work, and what the situation is for that work will vary from person to person. That said, one way to make sure you’re evaluating fundamentals, like technical knowledge and communication, is pair programming. I really like pair programming, because you can get a sense of what it’s actually like to work with someone.

Ian: Pair programming can be good for candidates too, because they get to see what the day-to-day work is actually like. It can work out if candidates honor an NDA, but the last thing you want is a candidate leaking trade secrets.

Kathy: Pair programming is controversial. It can be kind of intrusive, so I shy away from pairing in interviews. One place I interviewed did starter projects. They basically ask you to come into the office and work with the team you’re interviewing with for three days, solve a problem, and present on the third day. I thought it was way overkill, but I loved it as an interviewee, because I got to learn a lot about what the day-to-day would be like. The challenge, and the reason I wouldn’t do this as a hiring manager, is that it’s a really non-inclusive style. Even if you do it remotely, you’re asking people to take three days of their time for an unpaid interview. It’s a lot to take on as a candidate.

Dana: That’s right. That’s why it’s important to give people choices. Just because someone isn’t comfortable with pair programming, doesn’t mean they aren’t a good programmer or aren’t a good fit for the role. You should give them more than one way to shine. And I can say as a working mother, there’s no easy way to find time for a multi-hour take-home test, let alone a three-day “trial run!” You don’t want to test how much spare time someone has. You want to find out whether they’re a fit for your role.

Kathy: I have people walk me through their portfolio and tell me why they made certain decisions, as well as how they solved different problems along the way. 

Dana: Yes! I prefer to have technical conversations, rather than whiteboarding exercises. If the candidate has an open source project, I like to walk through their code with them. Conversations can be just as useful and revealing as exercises, if not more so. And they’re usually less stressful for candidates. Plus, you learn more about their communications skills. 

Klint: Given the wide range of possible exercises, what would you recommend to candidates to prepare for interviews?

Ian: Practice writing pseudocode and breaking problems down into smaller problems. Interviewers are generally looking to get a sense of how you think and solve problems, so it’s good to get comfortable showing your work, so to speak. I typically coach people not to jump right into code during an interview, even if you’re doing a coding exercise. When you show your planning process, it will help the interviewers learn about you, and it will probably lead to writing better code.

Kathy: Try to find people who work at the company, and find out what the interview process is like. Talk to customers who use the company’s product to get an idea of what they like and what they would like to see improved. Read app store reviews. Get a sense of what people are saying about the company and their products. Coming into an interview with data is very impressive to me, because it shows you’re serious about the role. As a candidate, I like reading things that the leadership at a company has written, from blog posts to tweets. It reveals what they value and how they represent themselves, which is a reflection of the work culture.

Klint: What sorts of things do you see as red flags when you’re interviewing a candidate?

Dana: A lack of curiosity. I expect candidates to be asking questions about the role, the company, the technology we use.

Kathy: Yeah, not asking questions is a big one. Candidates should spend time familiarizing themselves with the company’s products and have some specific questions and ideas in mind. Another red flag is when someone gets really defensive about a question.

Klint: Remote interviews seem like they’re here to stay for the foreseeable future. How is that changing interviewing, and what sort of best practices do you recommend for remote interviews?

Kathy: Remote interviews open up some exciting possibilities, like the opportunity for candidates to share their screens with interviewers and show them how they work. As an interviewer you can see their IDE and get a glimpse of how they work and see what’s going to make them successful at your company.

Dana: Start with the basics. Prepare the area. Ideally interviewers won’t judge you by your surroundings. Having a messy home shouldn’t be a factor. But let’s face it: You probably will be judged for that, so do your best to present well.

Ian: Just like an in-person interview, you should show up early. Don’t wait until the last minute to join the call, join several minutes early. That gives you time to work out any problems or make necessary adjustments, though you should do as much of that in advance as you can. You might want to restart your computer, or your WiFi router, an hour in advance to make sure everything will be stable for the duration of the interview. Close all the applications you don’t need for an interview. Remember that for a remote interview, good audio is more important than video, so be sure to deal with any potential sources of noise. If you can invest in a good microphone, it’s worthwhile. Have a contingency plan: a way to dial into the conference if all else fails, for example.

Klint: Any other thoughts or tips?

Dana: It’s worth investing in getting better at conducting interviews. You should train your team in interviewing; you can’t expect to be good at interviews if you don’t practice. If you’re hiring a lot, it’s easy to get complacent and run down a checklist of questions instead of diving deep into the important areas. Role-playing is a great way to prepare someone to conduct an interview. Well-run interviews also give candidates insight into who they’ll be working with, how people show up at the workplace, and whether they share the values that are important to you personally.

Ian: This is contentious, but when the interview is over don’t be afraid to give feedback on what candidates did well. Hopefully they can read between the lines and figure out what they didn’t do well. Sometimes interviewers are told “don’t give feedback for any reason.” But it doesn’t feel good to not receive any feedback. It’s especially important for entry-level developers to get feedback, otherwise they’re not going to get better.

