The Model Context Protocol: Simplifying Building AI apps with Anthropic Claude Desktop and Docker https://www.docker.com/blog/the-model-context-protocol-simplifying-building-ai-apps-with-anthropic-claude-desktop-and-docker/ Mon, 23 Dec 2024 22:52:01 +0000 https://www.docker.com/?p=66677 Anthropic recently unveiled the Model Context Protocol (MCP), a new standard for connecting AI assistants and models to reliable data and tools. However, packaging and distributing MCP servers is very challenging due to complex environment setups across multiple architectures and operating systems. Docker is the perfect solution for this — it allows developers to encapsulate their development environment into containers, ensuring consistency across all team members’ machines and making deployments predictable. In this blog post, we provide a few examples of using Docker to containerize Model Context Protocol (MCP) servers to simplify building AI applications.


What is Model Context Protocol (MCP)?

MCP (Model Context Protocol), a new protocol open-sourced by Anthropic, provides standardized interfaces for LLM applications to integrate with external data sources and tools. With MCP, your AI-powered applications can retrieve data from external sources, perform operations with third-party services, or even interact with local filesystems.

Among the use cases enabled by this protocol is the ability to expose custom tools to AI models. This provides key capabilities such as:

  • Tool discovery: Helping LLMs identify tools available for execution
  • Tool invocation: Enabling precise execution with the right context and arguments (a sketch of the wire format follows this list)
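To make these capabilities concrete, here is a sketch of what the exchange looks like on the wire. MCP uses JSON-RPC 2.0 between client and server; the tools/list and tools/call method names come from the MCP specification, while the tool name and arguments below are purely illustrative.

The client discovers which tools the server offers:

{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

The client then invokes a discovered tool with arguments (the take_screenshot tool here is hypothetical):

{"jsonrpc": "2.0", "id": 2, "method": "tools/call", "params": {"name": "take_screenshot", "arguments": {"url": "https://docs.docker.com"}}}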

Since its release, the developer community has been particularly energized. We asked David Soria Parra, Member of Technical Staff from Anthropic, why he felt MCP was having such an impact: “Our initial developer focus means that we’re no longer bound to one specific tool set.  We are giving developers the power to build for their particular workflow.”

How does MCP work? What challenges exist?

MCP works by introducing the concept of MCP clients and MCP servers — clients request resources, and servers handle those requests and perform the requested actions. MCP clients are often embedded into LLM-based applications, such as the Claude Desktop app. MCP servers are launched by the client to perform the desired work, using any additional tools, languages, or processes needed.

Examples of tools include filesystem access, GitHub and GitLab repo management, integrations with Slack, or retrieving or modifying state in Kubernetes clusters.

Figure 1: A high-level architecture diagram of MCP client and server interactions

The goal of MCP servers is to provide reusable toolsets that can be shared across clients like Claude Desktop — write one set of tools and use it across many LLM-based applications. But packaging and distributing these servers is currently a challenge. Specifically:

  1. Environment conflicts: Installing MCP servers often requires specific versions of Node.js, Python, and other dependencies, which may conflict with existing installations on a user’s machine.
  2. Lack of host isolation: MCP servers currently run on the host, granting access to all host files and resources.
  3. Complex setup: MCP servers currently require users to download all of the code and configure the environment themselves, making adoption difficult.
  4. Cross-platform challenges: Running the servers consistently across different architectures (e.g., x86 vs. Arm) and operating systems (e.g., Windows vs. Mac) introduces additional complexity.
  5. Dependencies: Ensuring that server-specific runtime dependencies are encapsulated and distributed safely.

How does Docker help?

Docker solves these challenges by providing a standardized method and tooling to develop, package, and distribute applications, including MCP servers. By packaging these MCP servers as containers, the challenges of isolation or environment differences disappear. Users can simply run a container, rather than spend time installing dependencies and configuring the runtime.

Docker Desktop provides a development platform to build, test, and run these MCP servers. Docker Hub is the world’s largest repository of container images, making it the ideal choice to distribute containerized MCP servers. Docker Scout helps ensure images are kept secure and free of vulnerabilities. Docker Build Cloud helps you build images more quickly and reliably, especially when cross-platform builds are required.

The Docker suite of products brings benefits to both publishers and consumers — publishers can easily package and distribute their servers and consumers can easily download and run them with little to no configuration.

Again quoting David Soria Parra, 

“Building an MCP server for ffmpeg would be a tremendously difficult undertaking without Docker. Docker is one of the most widely used packaging solutions for developers. The same way it solved the packaging problem for the cloud, it now has the potential to solve the packaging problem for rich AI agents”. 

Figure 2: Architecture diagram demonstrating MCP servers running in a Docker container

As we continue to explore how MCP allows us to connect to existing ecosystems of tools, we also envision MCP bridges to existing containerized tools.

Figure 3: Architecture diagram that shows a single MCP server calling multiple tools in their own containers

Try it yourself with containerized Reference Servers

As part of publishing the specification, Anthropic published an initial set of reference servers. We have worked with the Anthropic team to create Docker images for these servers and make them available from the new Docker Hub mcp namespace.

You can try this out today using Claude Desktop as the MCP client and Docker Desktop to run any of the reference servers by updating your claude_desktop_config.json file.

The list of current servers documents how to update the claude_desktop_config.json to activate these MCP server docker containers on your local host.

Using Puppeteer to take and modify screenshots using Docker

This demo will use the Puppeteer MCP server to take a screenshot of a website and invert the colors using Claude Desktop and Docker Desktop. Doing this without a containerized environment requires quite a bit of setup, but is fairly trivial using containers.

  1. Update your claude_desktop_config.json file to include the following configuration:

For example, extending Claude Desktop to use puppeteer for browser automation and web scraping requires the following entry (which is fully documented here):

{
  "mcpServers": {
    "puppeteer": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "--init", "-e", "DOCKER_CONTAINER=true", "mcp/puppeteer"]
    }
  }
}
  2. Restart Claude Desktop to apply the changed config file.
  3. Submit the following prompt using the Sonnet 3.5 model:

    Take a screenshot of docs.docker.com and then invert the colors
  4. Claude will run through several consent screens, asking you to confirm that you’re okay with running these new tools.
  5. After a brief moment, you’ll have your requested screenshot.

What happened? Claude planned out a series of tool calls, started the Puppeteer MCP server in a container, and then used the headless browser in that container to navigate to the site, grab a screenshot, invert the colors on the page, and finally grab a screenshot of the altered page.

Figure 4: Running Dockerized Puppeteer in Claude Desktop to invert colors on https://docs.docker.com/

Next steps

There’s already a lot that developers can try with this first set of servers. For an educational glimpse into what’s possible with database containers, we recommend connecting the sqlite server container and running the sample prompt it provides. It’s an eye-opening display of what’s already possible today. Plus, the demo is containerized!
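For reference, a hypothetical claude_desktop_config.json entry for the sqlite server might look like the following. It follows the same pattern as the Puppeteer example above and adds a named volume so the database persists between runs; check the server’s page in the Docker Hub mcp namespace for the exact image name and arguments:

{
  "mcpServers": {
    "sqlite": {
      "command": "docker",
      "args": ["run", "--rm", "-i", "-v", "mcp-test:/mcp", "mcp/sqlite", "--db-path", "/mcp/test.db"]
    }
  }
}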

We’re busy adding more content to enable you to easily build and distribute your own MCP Docker images. We are also encouraging and working closely with the community to package more Docker containers. Please reach out with questions in the discussion group.

Learn more

Recipe for Efficient Development: Simplify Collaboration and Security with Docker https://www.docker.com/blog/recipe-for-efficient-development-simplify-collaboration-security-with-docker/ Fri, 20 Dec 2024 13:23:32 +0000 https://www.docker.com/?p=66313 Collaboration and security are essential for delivering high-quality applications in modern software development, especially in cloud-native environments. Developers navigate intricate workflows, connect diverse systems, and safeguard applications against emerging threats — all while maintaining velocity and efficiency.

Think of development as preparing a multi-course meal in a high-pressure, professional kitchen, where precision, timing, and communication are critical. Each developer is a chef working on different parts of the dish, passing ingredients (code) along the way. When one part of the system encounters delays, it can ripple across the process, impacting the final result. Similarly, poor collaboration or security gaps can derail a project, causing delays and inefficiencies. 

Docker serves as the kitchen manager, ensuring everything flows smoothly, ingredients are passed securely, and security is integrated from start to finish.


Seamless collaboration with Docker Hub and Testcontainers Cloud

Success in a professional kitchen depends on clear communication and coordination. In development, it’s no different. Docker’s collaboration tools, like Docker Hub and Testcontainers Cloud, simplify how teams work together, share resources, and test efficiently.

  • Docker Hub can be thought of as a kitchen’s “prepped ingredients station.” It’s where some of the most essential ingredients are always ready to go. With a vast selection of curated, trusted images, developers can quickly access high-quality, pre-configured containers, ensuring consistency and reducing the chance of mistakes.
  • Testcontainers Cloud is like the kitchen’s test station, providing on-demand, production-like environments for testing. Developers can spin up these environments quickly, reducing setup time and ensuring code performs in a real-world setting. 

Effective coordination is critical whether you’re in a kitchen or on a development team, especially when projects involve distributed or hybrid teams. Clear communication ensures everyone is aligned and productive. The Docker suite of products provides the tools that make it possible for companies to more easily break down silos, share resources seamlessly, and ensure alignment — no matter how large the team is or where its members work.

By streamlining collaboration, Docker reduces complexity and allows teams to move forward with confidence. With Docker Hub, Testcontainers Cloud, and integrated security features, teams can share resources, track progress, and catch issues early, enabling them to deliver high-quality results on time.

These tools improve efficiency, reduce errors, and help teams move faster through the development inner loop by making collaboration seamless and resource sharing simple.

Integrated security from code to production

Embedding security into every development step is essential to maintaining speed and delivering high-quality software. With Docker, security is embedded into every step of the development process so teams can identify and fix issues earlier than ever.

  • Docker Scout monitors container images in real-time, identifying vulnerabilities early to ensure your software is production-ready. By identifying and resolving risks early, developers can maintain high-quality standards and accelerate time to market.

Docker also integrates additional security features that work behind the scenes.

By building security into the workflow, Docker helps teams identify risks earlier, improve code quality, and maintain momentum without compromising safety.

Efficiency in action with Docker

Speed, collaboration, and security are paramount in today’s development landscape. Docker simplifies and secures the development process, helping teams collaborate efficiently and deliver secure, high-quality software faster.

Just as a well-managed kitchen runs smoothly, Docker helps development teams stay coordinated, ensuring security and productivity work together in perfect harmony. Docker removes complexity, accelerates delivery, and embeds security, enabling teams to create efficient, secure applications on time.

Ready to boost efficiency and collaboration in your development process? Explore the Docker suite of products to see how they can streamline your workflow and improve your team’s productivity today. 

To learn more about fueling development efficiency, download our white paper, Reducing Every-Day Complexities for More Efficient Software Development with Docker.

Building Trust into Your Software with Verified Components https://www.docker.com/blog/building-trust-into-your-software-with-verified-components/ Thu, 19 Dec 2024 13:55:51 +0000 https://www.docker.com/?p=66325 Within software development, security and compliance are more than simple boxes to check. Each attestation and compliance check is backed by a well-considered risk assessment that aims to avoid ever-changing vulnerabilities and attack vectors. Software development teams don’t want to worry about vulnerabilities when they are focused on building something remarkable.

In this article, we explain how Docker Hub and Docker Scout can help development teams ensure a more secure and compliant software supply chain. 


Security starts with trusted foundations

Every structure needs a strong foundation. A weak base is where cracks begin to show. Using untrusted or outdated software is like building a skyscraper on sand, and security issues can derail progress, leading to costly fixes and delayed releases. By “shifting security left” — addressing vulnerabilities early in the development process — teams can avoid these setbacks down the road.  

Modern development demands a secure and compliant software supply chain. Unverified software or vulnerabilities buried deep within base images can become costly compliance issues, disrupting development timelines and eroding customer trust. One weak link in the supply chain can snowball into more significant issues, affecting product delivery and customer satisfaction. Without security and compliance checks, organizations will lack the credibility their customers rely on.

How Docker Hub and Scout help teams shift left

Software developers are like a construction crew building a skyscraper. The process requires specialized components — windows, elevators, wiring, concrete, and so on — which are found at a single supply depot and which work in harmony with each other. This idea is similar to microservices, which are pieced together to create modern applications. In this analogy, Docker Hub acts as the supply depot for a customer’s software supply chain, stocked with trusted container images that help developer teams streamline development.

Docker Hub is more than a container registry. It is the most widely trusted content distribution platform built on secure, verified, and dependable container images. Docker Official Images (DOI) and Docker Verified Publisher (DVP) programs provide a rock-solid base to help minimize risks and let development teams focus on creating their projects. 

Docker Hub simplifies supply chain security by ensuring developers start with trusted components. Its library of official and verified publisher images offers secure, up-to-date resources vetted for compliance and reliability, eliminating the risk of untrusted or outdated components.

Proactive risk management is critical to software development

To avoid breaking production environments, organizations need to plan ahead by catching and tracking common vulnerabilities and exposures (CVEs) early in the development process. Docker Scout enables proactive risk management by integrating security checks early in the development lifecycle. Scout reduces the likelihood of security incidents and streamlines the development process.

Additionally, Docker Scout Health Scores provide a straightforward framework for evaluating the security posture of container images used daily by development teams. Using an easy-to-understand alphabetical grading system (A to F), these scores assess CVEs in software components within Docker Hub. This feature lets developers quickly evaluate and select trusted content, ensuring a secure software supply chain.
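The same information is available from the Docker Scout CLI, which is handy in scripts and CI. A minimal sketch, using a public image as a stand-in for your own:

# Summarize the security posture of an image
docker scout quickview nginx:latest

# List known CVEs in the image, with severity and affected packages
docker scout cves nginx:latest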

Avoid shadow changes with IAM and RBAC for secure collaboration

Compliance is not glamorous, but it is essential to running a business. Development teams don’t want to have to worry about whether they are meeting industry standards — they want to know they are. Docker Hub makes compliance simple with pre-certified images and many features that take the guesswork out of governance. That means you can stay compliant while your teams keep growing and innovating.

The biggest challenge to scaling a team or growing your development operations is not about adding people — it’s about maintaining control without losing momentum. Tracking, reducing, and managing shadow changes helps your team preserve its flow state and development velocity.

Docker Hub’s Image Access Management (IAM) enforces precise permissions to ensure that only authorized people have access to modify sensitive information in repositories. Additionally, with role-based access control (RBAC), you’re not just delegating; you’re empowering your team with predefined roles that streamline onboarding, reduce mistakes, and keep everyone moving in harmony.

Docker Hub’s activity logs provide another layer of confidence as they let you track changes, enforce compliance, and build trust. These capabilities enhance security and boost collaboration by creating an environment where team members can focus on delivering high-quality applications.

Built-in trust

Without verified components, development teams can end up playing whack-a-mole with vulnerabilities. Time is lost. Money is spent. Trust is damaged. Now, picture a team working with trusted content and images that integrate security measures from the start. They deliver on time, on budget, and with confidence.

Building security into your applications doesn’t slow you down; it’s your superpower. Docker weaves trust and security into every part of your development process. Your applications are safeguarded, your delivery is accelerated, and your team is free to focus on what matters most — creating value.

Start your journey today. With Docker, you’re not just developing applications but building trust. Learn how trusted components help simplify compliance, enhance security, and empower your team to innovate fearlessly. 

Learn more

Docker Desktop 4.37: AI Catalog and Command-Line Efficiency https://www.docker.com/blog/docker-desktop-4-37/ Wed, 18 Dec 2024 17:37:08 +0000 https://www.docker.com/?p=66560 Key features of the Docker Desktop 4.37 release include Docker AI Catalog integration and new command-line capabilities for managing Docker Desktop itself.

The Docker Desktop 4.37 release brings incremental improvements that make developers’ lives easier by addressing common challenges in modern software development. With a focus on integrating AI resources and streamlining operational workflows, this update ensures developers can work faster, smarter, and more effectively.


Unlocking AI-driven development with Docker AI Catalog integration

AI/ML development is exploding, but many developers face hurdles accessing prebuilt AI models and tools. They often need to search across multiple platforms, wasting valuable time piecing together resources and overcoming compatibility issues. This fragmentation slows down innovation and makes it harder for teams to bring AI-driven features into their applications.

With Docker Desktop 4.37, the AI Catalog in Docker Hub is now accessible directly through Docker Desktop. This seamless integration enables developers to discover, pull, and integrate AI models into their workflows effortlessly. Whether you’re incorporating pretrained machine learning models or exploring generative AI tools, Docker Desktop ensures these resources are just a click away.

Figure 1: The AI Catalog in Docker Hub is now accessible directly through Docker Desktop.

Key benefits:

  • Streamlined discovery: You don’t need to leave your development environment to find AI tools. The AI Catalog is built into Docker Hub and can be immediately accessed from Docker Desktop.
  • Faster prototyping: By eliminating friction in accessing AI resources, teams can focus on building and iterating faster.
  • Enhanced compatibility: Docker’s containerized approach ensures AI models run consistently across environments, reducing setup headaches.

Whether you’re developing cutting-edge AI/ML applications or just beginning to experiment with AI tools, this integration empowers developers to innovate without distraction.

Command-line operations: Control Docker Desktop your way

For developers who automate workflows or work heavily in terminal environments, relying solely on graphical user interfaces (GUIs) can be limiting. Starting, stopping, or troubleshooting Docker Desktop often requires GUI navigation, which can disrupt automation pipelines and slow down power users.

Docker Desktop 4.37 introduces robust command-line capabilities for managing Docker Desktop itself. Developers can now perform essential tasks such as starting, stopping, restarting, and checking the status of Docker Desktop directly from the command line.
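For example, the new commands look like this (a brief sketch; run docker desktop --help on your installation for the authoritative list):

# Start Docker Desktop without opening the GUI
docker desktop start

# Check whether Docker Desktop is running
docker desktop status

# Restart or stop it from a script or CI step
docker desktop restart
docker desktop stop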

Key benefits:

  • Improved automation: Script Docker Desktop operations into CI/CD workflows, eliminating manual intervention.
  • Faster troubleshooting: Check the status and restart Docker Desktop without leaving the terminal, streamlining issue resolution.
  • Developer flexibility: A smoother, distraction-free experience for developers who prefer terminal-based workflows.

This new feature bridges the gap between GUI and command-line preferences, allowing developers to tailor their workflows to their needs.

Upgraded components: Keeping developers ahead

Docker Desktop 4.37 includes significant upgrades to its underlying components, bringing enhanced performance, security, and feature sets such as GPU-accelerated workflows.

Bug fixes and stability improvements

At Docker, we aim to provide a stable and dependable development platform so developer teams can focus on creating, not troubleshooting. Docker Desktop 4.37 also addresses several key bugs and usability concerns:

  • Default disk usage limit: New installations now default to a 1TB disk limit, offering additional flexibility for developers with large containerized applications.
  • Loopback AF_VSOCK connections: Fixed to ensure container communication reliability.
  • CLI context reset fixes: Prevent unintended resets when restoring default settings.
  • Dashboard synchronization: Ensures consistent behavior between the Docker Desktop Dashboard and the Docker daemon after engine restarts.
  • Resource Saver mode stability: Resolves issues with mode reengagement, improving power efficiency for resource-conscious users.

Wrapping up 

Docker Desktop 4.37 offers a step forward in enabling developers to innovate. With a focus on AI-driven development and automation-friendly operations, this release aligns with the evolving needs of modern software teams.

Learn more

Accelerate Your Docker Builds Using AWS CodeBuild and Docker Build Cloud https://www.docker.com/blog/accelerate-your-docker-builds-using-aws-codebuild-and-docker-build-cloud/ Wed, 18 Dec 2024 13:10:42 +0000 https://www.docker.com/?p=66293 Containerized application development has revolutionized modern software delivery, but slow image builds in CI/CD pipelines can bring developer productivity to a halt. Even with AWS CodeBuild automating application testing and building, teams face challenges like resource constraints, inefficient caching, and complex multi-architecture builds that lead to delays, lower release frequency, and prolonged recovery times.

Enter Docker Build Cloud, a high-performance cloud service designed to streamline image builds, integrate seamlessly with AWS CodeBuild, and reduce build times dramatically. With Docker Build Cloud, you gain powerful cloud-based builders, shared caching, and native multi-architecture support — all while keeping your CI/CD pipelines efficient and your developers focused on delivering value faster.

In this post, we’ll explore how AWS CodeBuild combined with Docker Build Cloud tackles common bottlenecks, boosts build performance, and simplifies workflows, enabling teams to ship more quickly and reliably.


By using AWS CodeBuild, you can automate the build and testing of container applications, enabling the construction of efficient CI/CD workflows. AWS CodeBuild is also integrated with AWS Identity and Access Management (IAM), allowing detailed configuration of access permissions for build processes and control over AWS resources.

Container images built with AWS CodeBuild can be stored in Amazon Elastic Container Registry (Amazon ECR) and deployed to various AWS services, such as Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), AWS Fargate, or AWS Lambda (Figure 1). Additionally, these services can leverage AWS Graviton, which adopts Arm-based architectures, to improve price performance for compute workloads.

Figure 1: CI/CD pipeline for AWS ECS using AWS CodeBuild (ECS Workshop).

Challenges of container image builds with AWS CodeBuild

Regardless of the tool used, building container images in a CI pipeline often takes a significant amount of time. This can lead to the following issues:

  • Reduced development productivity
  • Lower release frequency
  • Longer recovery time in case of failures

The main reasons why build times can be extended include:

1. Machines for building

Building container images requires substantial resources (CPU, RAM). If the machine specifications used in the CI pipeline are inadequate, build times can increase.

For simple container image builds, the impact may be minimal, but in cases of multi-stage builds or builds with many dependencies, the effect can be significant.

AWS CodeBuild allows changing instance types to improve these situations. However, such changes can apply to parts of the pipeline beyond container image builds, and they also increase costs.

Developers need to balance cost and build speed to optimize the pipeline.

2. Container image cache

In local development environments, Docker’s build cache can shorten rebuild times significantly by reusing previously built layers, avoiding redundant processing for unchanged parts of the Dockerfile. However, in cloud-based CI services, clean environments are used by default, so cache cannot be utilized, resulting in longer build times.

Although there are ways to use storage or container registries to leverage caching, these often are not employed because they introduce complexity in configuration and overhead from uploading and downloading cache data.

3. Multi-architecture builds (AMD64, Arm64)

To use Arm-based architectures like AWS Graviton in Amazon EKS or Amazon ECS, Arm64-compatible container image builds are required.

As local development environments have changed, such as the shift to Apple Silicon, cases requiring multi-architecture support for AMD64 and Arm64 have increased. However, building images for a different architecture than the host (for example, building x86 images on Arm, or vice versa) often requires emulation, which can further increase build times (Figure 2).

Although AWS CodeBuild provides both AMD64 and Arm64 instances, running them as separate pipelines is necessary, leading to more complex configurations and operations.

Figure 2: Creating multi-architecture Docker images using AWS CodeBuild.

Accelerating container image builds with Docker Build Cloud

The Docker Build Cloud service executes the Docker image build process in the cloud, significantly reducing build time and improving developer productivity (Figure 3).

Figure 3: How Docker Build Cloud works.

Particularly in CI pipelines, Docker Build Cloud enables faster container image builds without the need for significant changes or migrations to existing pipelines.

Docker Build Cloud includes the following features:

  • High-performance cloud builders: Cloud builders equipped with 16 vCPUs and 32GB RAM are available. This allows for faster builds compared to local environments or resource-constrained CI services.
  • Shared cache utilization: Cloud builders come with 200 GiB of shared cache, significantly reducing build times for subsequent builds. This cache is available without additional configuration, and Docker Build Cloud handles the cache maintenance for you.
  • Multi-architecture support (AMD64, Arm64): Docker Build Cloud supports native builds for multi-architecture with a single command. By specifying --platform linux/amd64,linux/arm64 in the docker buildx build command or using Bake, images for both Arm64 and AMD64 can be built simultaneously. This approach eliminates the need to split the pipeline for different architectures; a minimal local sketch of this workflow follows this list.
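Here is that minimal local sketch. The organization and builder names (ORG, BUILDER) are placeholders you would replace with your own; the created builder is typically addressed as cloud-ORG-BUILDER:

# Create a cloud builder tied to your Docker organization
docker buildx create --driver cloud ORG/BUILDER

# Build for both architectures on the cloud builder and push the result
docker buildx build --builder cloud-ORG-BUILDER \
  --platform linux/amd64,linux/arm64 \
  --tag myregistry/myimage:latest --push .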

Architecture of AWS CodeBuild + Docker Build Cloud

Figure 4 shows an example of how to use Docker Build Cloud to accelerate container image builds in AWS CodeBuild:

Figure 4: AWS CodeBuild + Docker Build Cloud architecture.
  1. The AWS CodeBuild pipeline is triggered from a commit to the source code repository (AWS CodeCommit, GitHub, GitLab).
  2. Preparations for running Docker Build Cloud are made in AWS CodeBuild (Buildx installation, specifying Docker Build Cloud builders).
  3. Container images are built on Docker Build Cloud’s AMD64 and Arm64 cloud builders.
  4. The built AMD64 and Arm64 container images are pushed to Amazon ECR.

Setting up Docker Build Cloud

First, set up Docker Build Cloud. (Note that new Docker subscriptions already include a free tier for Docker Build Cloud.)

Then, log in with your Docker account and visit the Docker Build Cloud Dashboard to create new cloud builders.

Once the builder is successfully created, a guide is displayed for using it in local environments (Docker Desktop, CLI) or CI/CD environments (Figure 5).

Figure 5: Setup instructions for Docker Build Cloud.

Additionally, to use Docker Build Cloud from AWS CodeBuild, a Docker personal access token (PAT) is required. Store this token in AWS Secrets Manager for secure access.
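A sketch of storing those credentials with the AWS CLI follows. The secret name here is an assumption; the DOCKER_USER and DOCKER_PAT keys simply match the ${SECRETS_NAME} references used in the buildspec below:

aws secretsmanager create-secret \
  --name docker-build-cloud-credentials \
  --secret-string '{"DOCKER_USER":"your-docker-username","DOCKER_PAT":"dckr_pat_xxx"}'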

Setting up the AWS CodeBuild pipeline

Next, set up the AWS CodeBuild pipeline. You should prepare an Amazon ECR repository to store the container images beforehand.

The following settings are used to create the AWS CodeBuild pipeline:

  • AMD64 instance with 3GB memory and 2 vCPUs.
  • Service role with permissions to push to Amazon ECR and access the Docker personal access token from AWS Secrets Manager (a sketch of such a policy follows this list).
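As a sketch, an IAM policy granting those permissions might look like the following; the region, account ID, repository, and secret names are placeholders to scope to your own resources:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ecr:GetAuthorizationToken",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload",
        "ecr:PutImage"
      ],
      "Resource": "arn:aws:ecr:REGION:ACCOUNT_ID:repository/YOUR_REPOSITORY"
    },
    {
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:REGION:ACCOUNT_ID:secret:YOUR_SECRET-*"
    }
  ]
}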

The buildspec.yml file is configured as follows:

version: 0.2

env:
  variables:
    ARCH: amd64
    ECR_REGISTRY: [ECR Registry]
    ECR_REPOSITORY: [ECR Repository]
    DOCKER_ORG: [Docker Organization]
  secrets-manager:
    DOCKER_USER: ${SECRETS_NAME}:DOCKER_USER
    DOCKER_PAT: ${SECRETS_NAME}:DOCKER_PAT

phases:
  install:
    commands:
      # Installing Buildx
      - BUILDX_URL=$(curl -s https://raw.githubusercontent.com/docker/actions-toolkit/main/.github/buildx-lab-releases.json | jq -r ".latest.assets[] | select(endswith(\"linux-$ARCH\"))")
      - mkdir -vp ~/.docker/cli-plugins/
      - curl --silent -L --output ~/.docker/cli-plugins/docker-buildx $BUILDX_URL
      - chmod a+x ~/.docker/cli-plugins/docker-buildx

  pre_build:
    commands:
      # Logging in to Amazon ECR
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $ECR_REGISTRY
      # Logging in to Docker (Build Cloud)
      - echo "$DOCKER_PAT" | docker login --username $DOCKER_USER --password-stdin
      # Specifying the cloud builder
      - docker buildx create --use --driver cloud $DOCKER_ORG/demo

  build:
    commands:
      # Image tag
      - IMAGE_TAG=$(echo ${CODEBUILD_RESOLVED_SOURCE_VERSION} | head -c 7)
      # Build container image & push to Amazon ECR
      - docker buildx build --platform linux/amd64,linux/arm64 --push --tag "${ECR_REGISTRY}/${ECR_REPOSITORY}:${IMAGE_TAG}" .

In the install phase, Buildx, which is necessary for using Docker Build Cloud, is installed.

Although Buildx may already be installed in AWS CodeBuild, it might be an unsupported version for Docker Build Cloud. Therefore, it is recommended to install the latest version.

In the pre_build phase, the following steps are performed:

  • Log in to Amazon ECR.
  • Log in to Docker (Build Cloud).
  • Specify the cloud builder.

In the build phase, the image tag is specified, and the container image is built and pushed to Amazon ECR.

Instead of separating the build and push commands, using --push to directly push the image to Amazon ECR helps avoid unnecessary file transfers, contributing to faster builds.

Results comparison

To make a comparison, an AWS CodeBuild pipeline without Docker Build Cloud is created. The same instance type (AMD64, 3GB memory, 2 vCPUs) is used, and the build is limited to AMD64 container images.

Additionally, Docker login is used to avoid the pull rate limit imposed by Docker Hub.

version: 0.2

env:
  variables:
    ECR_REGISTRY: [ECR Registry]
    ECR_REPOSITORY: [ECR Repository]
  secrets-manager:
    DOCKER_USER: ${SECRETS_NAME}:DOCKER_USER
    DOCKER_PAT: ${SECRETS_NAME}:DOCKER_PAT

phases:
  pre_build:
    commands:
      # Logging in to Amazon ECR
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $ECR_REGISTRY
      # Logging in to Docker
      - echo "$DOCKER_PAT" | docker login --username $DOCKER_USER --password-stdin

  build:
    commands:
      # Image tag
      - IMAGE_TAG=$(echo ${CODEBUILD_RESOLVED_SOURCE_VERSION} | head -c 7)
      # Build container image & push to Amazon ECR
      - docker build --push --tag "${ECR_REGISTRY}/${ECR_REPOSITORY}:${IMAGE_TAG}" .

Figure 6 shows the result of the execution:

Figure 6: The result of the execution without Docker Build Cloud (5 minutes and 59 seconds).

Figure 7 shows the execution result of the AWS CodeBuild pipeline using Docker Build Cloud:

Figure 7: The result of the execution with Docker Build Cloud (1 minute and 4 seconds).

The results may vary depending on the container images being built and the state of the cache, but in this example the pipeline with Docker Build Cloud completed in 1 minute and 4 seconds versus 5 minutes and 59 seconds without it, while also producing multi-architecture (AMD64 and Arm64) images within a single pipeline.

Conclusion

Integrating Docker Build Cloud into a CI/CD pipeline using AWS CodeBuild can dramatically reduce build times and improve release frequency. This allows developers to maximize productivity while delivering value to users more quickly.

As mentioned previously, the new Docker subscription already includes a free tier for Docker Build Cloud. Take advantage of this opportunity to test how much faster you can build container images for your current projects.

Learn more

Docker 2024 Highlights: Innovations in AI, Security, and Empowering Development Teams https://www.docker.com/blog/docker-2024-highlights/ Tue, 17 Dec 2024 13:45:23 +0000 https://www.docker.com/?p=66348 In 2024, as developers and engineering teams focused on delivering high-quality, secure software faster, Docker continued to evolve with impactful updates and a streamlined user experience. This commitment to empowering developers was recognized in the annual Stack Overflow Developer Survey, where Docker ranked as one of the most loved and widely used tools for yet another year. Here’s a look back at Docker’s 2024 milestones and how we helped teams build, test, and deploy with greater ease, security, and control than ever.


Streamlining the developer experience

Docker focused heavily on streamlining workflows, creating efficiencies, and reducing the complexities often associated with managing multiple tools. One big announcement in 2024 was our upgraded Docker plans. With the launch of updated Docker subscriptions, developers now have access to the entire suite of Docker products under their existing subscription.

The all-in-one subscription model enables seamless integration of Docker Desktop, Docker Hub, Docker Build Cloud, Docker Scout, and Testcontainers Cloud, giving developers everything they need to build efficiently. By providing easy access to the suite of products and flexibility to scale, Docker allows developers to focus on what matters most — building and innovating without unnecessary distractions.

For more details on Docker’s all-in-one subscription approach, check out our Docker plans announcement.

Build up to 39x faster with Docker Build Cloud

Docker Build Cloud, introduced in 2024, brings the best of two worlds — local development and the cloud to developers and engineering teams worldwide. It offloads resource-intensive build processes to the cloud, ensuring faster, more consistent builds while freeing up local machines for other tasks.

A standout feature is shared build caches, which dramatically improve efficiency for engineering teams working on large-scale projects. Shared caches allow teams to avoid redundant rebuilds by reusing intermediate layers of images across builds, accelerating iteration cycles and reducing resource consumption. This approach is especially valuable for collaborative teams working on shared codebases, as it minimizes duplicated effort and enhances productivity.

Docker Build Cloud also offers native support for multi-architecture builds, eliminating the need for setting up and maintaining multiple native builders. This support removes the challenges associated with emulation, further improving build efficiency.

We’ve designed Docker Build Cloud to be easy to set up wherever you run your builds, without requiring a massive lift-and-shift effort. Docker Build Cloud also works well with Docker Compose, GitHub Actions, and other CI solutions. This means you can seamlessly incorporate Docker Build Cloud into your existing development tools and services and immediately start reaping the benefits of enhanced speed and efficiency.

Check out our build time savings calculator to estimate your potential savings in hours and dollars. 

Optimizing development workflows with performance enhancements

In 2024, Docker Desktop introduced a series of enterprise-grade performance enhancements designed to streamline development workflows at scale. These updates cater to the unique needs of development teams operating in diverse, high-performance environments.

One notable feature is the Virtual Machine Manager (VMM) in Docker Desktop for Mac, which provides a robust alternative to the Apple Virtualization Framework. Available since Docker Desktop 4.35, VMM significantly boosts performance for native Arm-based images, delivering faster and more efficient workflows for M1 and M2 Mac users. For development teams relying on Apple’s latest hardware, this enhancement translates into reduced build times and a smoother experience when working with containerized applications.

Additionally, Docker Desktop expanded its platform support to include Red Hat Enterprise Linux (RHEL) and Windows on Arm architectures, enabling organizations to maintain a consistent Docker Desktop experience across a wide array of operating systems. This flexibility ensures that development teams can optimize their workflows regardless of the underlying platform, leveraging platform-specific optimizations while maintaining uniformity in their tooling.

These advancements reflect Docker’s unwavering commitment to speed, reliability, and cross-platform support, ensuring that development teams can scale their operations without bottlenecks. By minimizing downtime and enhancing performance, Docker Desktop empowers developers to focus on innovation, improving productivity across even the most demanding enterprise environments.

More options to improve file operations for large projects

We enhanced Docker Desktop with synchronized file shares (Figure 1), a feature that can significantly improve file operation speeds by 2-10x. This enhancement brings fast and flexible host-to-VM file sharing, offering a performance boost for developers dealing with extensive codebases.

Synchronized file sharing is ideal for developers who:

  • Develop on projects that consist of a significant number of files (such as PHP or Node projects).
  • Develop using large repositories or monorepos with more than 100,000 files, totaling significant storage.
  • Utilize virtual file systems (such as VirtioFS, gRPC FUSE, or osxfs) and face scalability issues with their workflows.
  • Encounter performance limitations and want a seamless file-sharing solution without worrying about ownership conflicts.

This integration streamlines workflows, allowing developers to focus more on coding and less on managing file synchronization issues and slow file read times. 

Figure 1: Synchronized file shares.

Enhancing developer productivity with Docker Debug 

Docker Debug enhances the ability of developer teams to debug any container, especially those without a shell (that is, distroless or scratch images). The ability to peek into “secure” images significantly improves the debugging experience for both local and remote containerized applications. 

Docker Debug does this by attaching a dedicated debugging toolkit to any image and allows developers to easily install additional tools for quick issue identification and resolution. Docker Debug not only streamlines debugging for both running and stopped containers but also is accessible directly from both the Docker Desktop CLI and GUI (Figure 2). 

Figure 2: Docker Debug.

Being able to troubleshoot images without modifying them is crucial for maintaining the security and performance of containerized applications, especially those images that traditionally have been hard to debug. Docker Debug offers:

  • Streamlined debugging process: Easily debug local and remote containerized applications, even those not running, directly from Docker Desktop.
  • Cross-device and cloud compatibility: Initiate debugging effortlessly from any device, whether local or in the cloud, enhancing flexibility and productivity.

Docker Debug improves productivity and seamless integration. The docker debug command simplifies attaching a shell to any container or image. This capability reduces the cognitive load on developers, allowing them to focus on solving problems rather than configuring their environment. 
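In practice, it is a single command; the container and image names below are placeholders:

# Attach a debug shell to a running container, even a distroless one
docker debug my-running-container

# Debug an image directly, without starting a regular container first
docker debug myregistry/my-distroless-image:latest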

Ensuring reliable image builds with Docker Build checks

Docker Desktop 4.33 was a big release: in addition to the GA release of Docker Debug, it included the GA release of Docker Build checks, a new feature that ensures smoother and more reliable image builds. Build checks automatically validate common issues in your Dockerfiles before the build process begins, catching errors like invalid syntax, unsupported instructions, or missing dependencies. By surfacing these issues upfront, Docker Build checks help developers save time and avoid costly build failures.

You can access Docker Build checks in the CLI and in the Docker Desktop Builds view. The feature also works seamlessly with Docker Build Cloud, both locally and through CI. Whether you’re optimizing your Dockerfiles or troubleshooting build errors, Docker Build checks let you create efficient, high-quality container images with confidence — streamlining your development workflow from start to finish.
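You can also run the checks on their own, without executing a full build. A minimal sketch from the root of a project containing a Dockerfile:

# Validate the Dockerfile and print any check violations without building
docker build --check .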

Onboarding and learning resources for developer success  

To further reduce friction, Docker revamped its learning resources and integrated new tools to enhance developer onboarding. By adding beginner-friendly tutorials, Docker’s learning center makes it easier for developers to ramp up and quickly learn to use Docker tools, helping them spend more time coding and less time troubleshooting. 

As Docker continues to rank as a top developer tool globally, we’re dedicated to empowering our community with continuous learning support.

Built-in container security from code to production

In an era where software supply chain security is essential, Docker has raised the bar on container security. With integrated security measures across every phase of the development lifecycle, Docker helps teams build, test, and deploy confidently.

Proactive security insights with Docker Scout Health Scores

Docker Scout, launched in 2023,  has become a cornerstone of Docker’s security ecosystem, empowering developer teams to identify and address vulnerabilities in container images early in the development lifecycle. By integrating with Docker Hub, Docker Desktop, and CI/CD workflows, Scout ensures that security is seamlessly embedded into every build. 

Addressing vulnerabilities during the inner loop — the development phase — is estimated to be up to 100 times less costly than fixing them in production. This underscores the critical importance of early risk visibility and remediation for engineering teams striving to deliver secure, production-ready software efficiently.

In 2024, we announced Docker Scout Health Scores (Figure 3), a feature designed to better communicate the security posture of container images development teams use every day. Docker Scout Health Scores provide a clear, alphabetical grading system (A to F) that evaluates common vulnerabilities and exposures (CVEs) for software components within Docker Hub. This feature allows developers to quickly assess and wisely choose trusted content for a secure software supply chain. 

Figure 3: Docker Scout Health Scores, showing checks for high-profile vulnerabilities, supply chain attestations, unapproved images, outdated images, and more.

For a deeper dive, check out our blog post on enhancing container security with Docker Scout and secure repositories.

Air-gapped containers: Enhanced security for isolated environments

Docker introduced support for air-gapped containers in Docker Desktop 4.31, addressing the unique needs of highly secure, offline environments. Air-gapped containers enable developers to build, run, and test containerized applications without requiring an active internet connection. 

This feature is crucial for organizations operating in industries with stringent compliance and security requirements, such as government, healthcare, and finance. By allowing developers to securely transfer container images and dependencies to air-gapped systems, Docker simplifies workflows and ensures that even isolated environments benefit from the power of containerization.

Strengthening trust with SOC 2 Type 2 and ISO 27001 certifications

Docker also achieved two major milestones in its commitment to security and reliability: SOC 2 Type 2 attestation and ISO 27001 certification. These globally recognized standards validate Docker’s dedication to safeguarding customer data, maintaining robust operational controls, and adhering to stringent security practices. SOC 2 Type 2 attestation focuses on the effective implementation of security, availability, and confidentiality controls, while ISO 27001 certification ensures compliance with best practices for managing information security systems.

These certifications provide developers and organizations with increased confidence in Docker’s ability to support secure software supply chains and protect sensitive information. They also demonstrate Docker’s focus on aligning its services with the needs of modern enterprises.

Accelerating success for development teams and organizations

In 2024, Docker introduced a range of features and enhancements designed to empower development teams and streamline operations across organizations. From harnessing the potential of AI to simplifying deployment workflows and improving security, Docker’s advancements are focused on enabling teams to work smarter and build with confidence. By addressing key challenges in development, management, and security, Docker continues to drive meaningful outcomes for developers and businesses alike.

Docker Home: A central hub to access and manage Docker products

Docker introduced Docker Home (Figure 4), a central hub for users to access Docker products, manage subscriptions, adjust settings, and find resources — all in one place. This approach simplifies navigation for developers and admins. Docker Home allows admins to manage organizations, users, and onboarding processes, with access to dashboards for monitoring Docker usage.

Future updates will add personalized features for different roles, and business subscribers will gain access to tools like the Docker Support portal and organization-wide notifications.

Figure 4: Docker Home, showing options to explore Docker products, the Admin Console, and more.

Empowering AI innovation  

Docker’s ecosystem supports AI/ML workflows, helping developers work with these cutting-edge technologies while staying cloud-native and agile. Read the Docker Labs GenAI series to see how we’re innovating and experimenting in the open.

Through partnerships like those with NVIDIA and GitHub, Docker ensures seamless integration of AI tools, allowing teams to rapidly experiment, deploy, and iterate. This emphasis on enabling advanced tech aligns Docker with organizations looking to leverage AI and ML in containerized environments.

Optimizing AI application development with Docker Desktop and NVIDIA AI Workbench

Docker and NVIDIA partnered to integrate Docker Desktop with NVIDIA AI Workbench, streamlining AI development workflows. This collaboration simplifies setup by automatically installing Docker Desktop when selected as the container runtime in AI Workbench, allowing developers to focus on creating, testing, and deploying AI models without configuration hassles. By combining Docker’s containerization capabilities with NVIDIA’s advanced AI tools, this integration provides a seamless platform for model training and deployment, enhancing productivity and accelerating innovation in AI application development. 

Docker + GitHub Copilot: AI-powered developer productivity

We announced that Docker joined GitHub’s Partner Program and unveiled the Docker extension for GitHub Copilot (@docker). This extension is designed to assist developers in working with Docker directly within their GitHub workflows. This integration extends GitHub Copilot’s technology, enabling developers to generate Docker assets, learn about containerization, and analyze project vulnerabilities using Docker Scout, all from within the GitHub environment.

Accelerating AI development with the Docker AI catalog

Docker launched the AI Catalog, a curated collection of generative AI images and tools designed to simplify and accelerate AI application development. This catalog offers developers access to powerful models like IBM Granite, Llama, Mistral, Phi 2, and SolarLLM, as well as applications such as JupyterHub and H2O.ai. By providing essential tools for machine learning, model deployment, inference optimization, orchestration, ML frameworks, and databases, the AI Catalog enables developers to build and deploy AI solutions more efficiently. 

The Docker AI Catalog addresses common challenges in AI development, such as decision overload from the vast array of tools and frameworks, steep learning curves, and complex configurations. By offering a curated list of trusted content and container images, Docker simplifies the decision-making process, allowing developers to focus on innovation rather than setup. This initiative underscores Docker’s commitment to empowering developers and publishers in the AI space, fostering a more streamlined and productive development environment. 

Streamlining enterprise administration 

Simplified deployment and management with Docker’s MSI and PKG installers

Docker simplifies deploying and managing Docker Desktop with the new MSI Installer for Windows and PKG Installer for macOS. The MSI Installer enables silent installations, automated updates, and login enforcement, streamlining workflows for IT admins. Similarly, the PKG Installer offers macOS users easy deployment and management with standard tools. These installers enhance efficiency, making it easier for organizations to equip teams and maintain secure, compliant environments.

These new installers also align with Docker’s commitment to simplifying the developer experience and improving organizational management. Whether you’re setting up a few machines or deploying Docker Desktop across an entire enterprise, these tools provide a reliable and efficient way to keep teams equipped and ready to build.

New sign-in enforcement options enhance security and help streamline IT administration 

Docker simplifies IT administration and strengthens organizational security with new sign-in enforcement options for Docker Desktop. These features allow organizations to ensure users are signed in while using Docker, aligning local software with modern security standards. With flexible deployment options — including macOS Config Profiles, Windows Registry Keys, and the cross-platform registry.json file — IT administrators can easily enforce policies that prevent tampering and enhance security. These tools empower organizations to manage development environments more effectively, providing a secure foundation for teams to build confidently.
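For example, the cross-platform registry.json approach boils down to a small JSON file, placed in a Docker-defined location on each machine, that lists which Docker organizations users must belong to when signing in; "your-org" below is a placeholder for your own organization name:

{
  "allowedOrgs": ["your-org"]
}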

Desktop Insights: Unlocking performance and usage analytics

Docker introduced Desktop Insights, a powerful feature that provides developers and teams with actionable analytics to optimize their use of Docker Desktop. Accessible through the Docker Dashboard, Desktop Insights offers a detailed view of resource usage, build times, and performance metrics, helping users identify inefficiencies and fine-tune their workflows (Figure 5).

Whether you’re tracking the speed of container builds or understanding how resources like CPU and memory are being utilized, Desktop Insights empowers developers to make data-driven decisions. By bringing transparency to local development environments, this feature aligns with Docker’s mission to streamline container workflows and ensure developers have the tools to build faster and more effectively.

Figure 5: The Desktop Insights dashboard in the Admin Console, showing total active users, licensed users, total builds, total containers run, and more.

New usage dashboards in Docker Hub

Docker introduced Usage dashboards in Docker Hub, giving organizations greater visibility into how they consume resources. These dashboards provide detailed insights into storage and image pull activity, helping teams understand their usage patterns at a granular level (Figure 6). 

By breaking down data by repository, tag, and even IP address, the dashboards make it easy to identify high-traffic images or repositories that might require optimization. With this added transparency, teams can better manage their storage, avoid unnecessary pull requests, and optimize workflows to control costs. 

Usage dashboards enhance accountability and empower organizations to fine-tune their Docker Hub usage, ensuring resources are used efficiently and effectively across all projects.

Screenshot of Docker Usage dashboard showing a graph of daily pulls over time.
Figure 6: Usage dashboard.

Enhancing security with organization access tokens

Docker introduced organization access tokens, which let teams manage access to Docker Hub repositories at an organizational level. Unlike personal access tokens tied to individual users, these tokens are associated with the organization itself, allowing for centralized control and reducing reliance on individual accounts. This approach enhances security by enabling fine-grained permissions and simplifying the management of automated processes and CI/CD pipelines. 

Organization access tokens offer several advantages, including the ability to set specific access permissions for each token, such as read or write access to selected repositories. They also support expiration dates, aligning with compliance requirements and bolstering security. By providing visibility into token usage and centralizing management within the Admin Console, these tokens streamline operations and improve governance for organizations of all sizes. 
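In a CI pipeline, using an organization access token might look like the following sketch, assuming (per our reading of the Docker Hub docs) that you sign in with the organization name as the username. DOCKER_OAT is assumed to be a CI secret, and my-org is a placeholder:

```
# Authenticate with an organization access token rather than a personal one
echo "$DOCKER_OAT" | docker login --username my-org --password-stdin
docker pull my-org/private-image:latest
```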

Docker’s vision for 2025

Docker’s journey doesn’t end here. In 2025, Docker remains committed to expanding its support for cloud-native and AI/ML development, reinforcing its position as the go-to container platform. New integrations and expanded multi-cloud capabilities are on the horizon, promising a more connected and versatile Docker ecosystem.

As Docker continues to build for the future, we’re committed to empowering developers, supporting the open source community, and driving efficiency in software development at scale. 

2024 was a year of transformation for Docker and the developer community. With major advances in our product suite, continued focus on security, and streamlined experiences that deliver value, Docker is ready to help developer teams and organizations succeed in an evolving tech landscape. As we head into 2025, we invite you to explore Docker’s suite of tools and see how Docker can help your team build, innovate, and secure software faster than ever.

Learn more

How to Create and Use an AI Git Agent https://www.docker.com/blog/how-to-create-and-use-an-ai-git-agent/ Mon, 16 Dec 2024 14:23:50 +0000 https://www.docker.com/?p=66249 This ongoing Docker Labs GenAI series explores the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing software as open source so you can play, explore, and hack with us, too.

In our past experiments, we started our work from the assumption that we had a project ready to work on. That means someone like a UI tech writer would need to understand Git operations in order to use the tools we built for them. Naturally, because we have been touching on Git so frequently, we wanted to try getting a Git agent started. Then, we could use this Git agent to understand PR branches for a variety of user personas — without anyone needing to know the ins and outs of Git.


Git as an agent

We are exploring the idea that tools are agents. So, what would a Git agent do? 

Let’s tackle our UI use case prompt. 

Previously:

You are at $PWD of /project, which is a git repo.
Force checkout {{branch}}
Run a three-dot diff of the files changed in {{branch}} compared to main using --name-only.

A drawback that isn’t shown here is that there is no authentication. So, if you haven’t fetched that branch or pulled commits already, this prompt will at best be unreliable and will more than likely fail (Figure 1):

Screenshot of Logs showing failure to authenticate.
Figure 1: No authentication occurs.

Now:

You are a helpful assistant that checks a PR for user-facing changes.
1. Fetch everything and get on latest main.
2. Checkout the PR branch and pull latest.
3. Run a three-dot git diff against main for just files. Write the output to /thread/diff.txt.

This time around, you can see that we are being less explicit about the Git operations, we can export outputs to the conversation thread, and, most importantly, we have authentication with a new prompt!
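For reference, the work this prompt asks for corresponds to ordinary Git commands, roughly like this sketch (the branch name is a placeholder):

```
git fetch --all                           # 1. fetch everything
git checkout main && git pull             #    ...and get on latest main
git checkout my-pr-branch && git pull     # 2. check out the PR branch and pull latest
git --no-pager diff main... --name-only > /thread/diff.txt   # 3. three-dot diff, names only
```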

Preparing GitHub authentication

Note: These prompts should be easily adaptable to other Git providers, but we use GitHub at Docker.

Before we can do anything with GitHub, we have to authenticate. There are several ways to do this, but for this post we’ll focus on SSH-based auth rather than using HTTPS through the CLI. Without getting too deep into the Git world, we will be authenticating with keys on our machine that are associated with our account. These keys and configurations are commonly located at ~/.ssh on Linux/Mac. Furthermore, users commonly maintain Git config at ~/.gitconfig.

The .gitconfig file is particularly useful because it lets us specify line-ending (carriage return) rules — something that can easily cause Git to fail when running in a Linux container. We will also need to modify our SSH config to remove the macOS-only UseKeychain option. We found these changes are enough to authenticate using SSH in alpine/git. But we, of course, don’t want to modify any host configuration.

We came up with a fairly simple flow that lets us prepare to use Git in a container without messing with any host SSH configs.

  1. Readonly mounts: Git config and SSH keys are stored on specific folders on the host machine. We need to mount those in.
    a. Mount ~/.ssh into a container as /root/.ssh-base readonly.
    b. Mount ~/.gitconfig into the same container as /root/.gitconfig.
  2. Copy /root/.ssh-base to /root/.ssh and make the new copy writable.
  3. Make necessary changes to config.
  4. For the LLM, we also need it to verify the config is in the thread and the changes were made to it. In the event that it fails to make the right changes, the LLM can self-correct.
  5. Copy the .ssh directory and .gitconfig to /thread.
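Taken together, the flow looks roughly like this shell sketch. The concrete edits in step 3 (dropping UseKeychain and setting a line-ending rule) are our illustrative reading of the "necessary changes"; the actual prompt may differ:

```
# Step 1's read-only mounts (/root/.ssh-base and /root/.gitconfig) are assumed in place.
cp -r /root/.ssh-base /root/.ssh                  # 2. copy the mounted keys...
chmod -R u+w /root/.ssh                           #    ...and make the copy writable
[ -f /root/.ssh/config ] && \
  sed -i '/UseKeychain/d' /root/.ssh/config       # 3. remove the macOS-only option
cp /root/.gitconfig /thread/.gitconfig            # 5. persist Git config for later tool runs
printf '[core]\n\tautocrlf = input\n' \
  >> /thread/.gitconfig                           # 3. line-ending rule (an assumption)
cp -r /root/.ssh /thread/.ssh                     # 5. persist the prepared keys
```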

All of this is baked in a prompt you can find and run on GitHub. Simply replace <username> with your system username (Figure 2).

Screenshot of mounts, showing storage of Git config and SSH keys.
Figure 2: Readonly mounts.

If you’re using the default behavior in our extension for VSCode, then the /thread/ volume is ephemeral, so you would need to make sure your agent runs this prompt before any Git tools. However, if you use the Set prompt thread ID command in VSCode, you can keep the /thread volume around under your own tag (Figure 3). 

Tagging the thread persists everything after the prompt completes, so you would only need to run the prompt once before going on to develop your Git tools.

Screenshot of setting the prompt thread ID.
Figure 3: Set the prompt thread ID.

If you opt to persist the thread, then this prompt would only need to be run once and the Git credentials will stick around (Figure 4).

Screenshot showing the stored configuration info.
Figure 4: Stored credential data.

Now that we have our credentials ready, we need our Git tool to pick up on them.

container:
  image: alpine/git
  entrypoint:
    - /bin/sh
    - -c
  command:
    - "cp -r /thread/.ssh /root/.ssh && cp -r /thread/.gitconfig /root/.gitconfig && git --no-pager {{args|safe}}"

Our container definition copies the SSH credentials and .gitconfig files to the correct directory before running Git commands.

```
Unpacking objects:  90% (9/10)
Unpacking objects: 100% (10/10)
Unpacking objects: 100% (10/10), 4.14 KiB | 141.00 KiB/s, done.
From github.com:docker/repo
   abcd123..abcd123  main           -> origin/main
 * [new tag]           branch/v2982.0.0 -> branch/v2982.0.0
# ROLE assistant
I have fetched all origin branches. The latest updates have been applied to your local repository, and any new tags have been downloaded.
```

Now, we can see our Git tool is authenticated as if it were us.

Running the agent now, it can:

```
## ROLE assistant
I have run the git fetch --all command, which retrieves updates from all remotes. If you need further assistance with your Git repository, feel free to ask!
```

# ROLE assistant
```json
{"request":"checkout main"}
```

## ROLE assistant
```json
{"args":"checkout main"}
```

```
## ROLE assistant
You are already on the 'main' branch, and it is up to date with 'origin/main'.
```

# ROLE assistant
```json
{"request":"checkout the branch branch and pull latest changes."}
```

...
## ROLE assistant
The branch branch has been successfully checked out, and it's already up to date with the latest changes.
...

# ROLE assistant
```json
{"request":"Run a three-dot git diff against main and write the output for just files to /thread/diff.txt."}
```

...

## ROLE assistant
```json
{"args":"diff main... --name-only","outfile":"/thread/diff.txt"}
```

*entering tool*

## ROLE tool (git)
file1.tsx
file2.tsx
...
```
## ROLE assistant
The three-dot Git diff against the main branch has been executed, and the output has been saved to /thread/diff.txt. This includes the names of files that have differences.
```

With the new Git agent, we have enabled our assistant to fetch new information from upstream, switch branches, and pull in new code to review. If we provide our agent with credentials that are limited to read-only access, this agent could even be used by someone who does not have Git installed locally.

For more on what we’re doing at Docker, subscribe to our newsletter.

Learn more

How AI Assistants Can Decode GitHub Repos for UI Writers https://www.docker.com/blog/ai-for-ui-writers/ Mon, 16 Dec 2024 14:19:07 +0000 https://www.docker.com/?p=65703 This ongoing Docker Labs GenAI series explores the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing software as open source so you can play, explore, and hack with us, too.

Can an AI-powered assistant understand a GitHub repo enough to answer questions for UI writers?


Across many projects, user-facing content is rendered based on some sort of client-side code. Whether a website, a game, or a mobile app, it’s critical to nail the text copy displayed to the user.

So let’s take a sample question: Do any open PRs in this project need to be reviewed for UI copy? In other words, we want to scan a GitHub repo’s PRs and gain intelligence about the changes included.

Disclaimer: The best practice to accomplish this at a mature organization would be to implement internationalization (i18n), which centralizes user-facing text. However, in a world of AI-powered tools, we believe our assistants will help minimize friction for all projects, not just ones that have adopted i18n.

So, let’s start off by seeing what options we already have.

The first instinct someone might have is to open the new Copilot assistant in the GitHub nav.

Figure 1: Type / to search.

We first tried to get it to answer a basic question: “How many PRs are open?”

Figure 2: “How many PRs are open?” The answer doesn’t give a number.

Despite having access to the GitHub repo, the Copilot agent provides less helpful information than we might expect.

Figure 3: Copilot is powered by AI, so mistakes are possible.

We don’t even get a number like we asked, despite GitHub surfacing that information on the repository’s main page. Following up our first query with the main query we want to ask effectively just gives us the same answer.

Figure 4: The third PR is filesharing: add some missing contexts.

After inspecting the third PR in the list, we found that it doesn’t contain user-facing changes. One strong indicator for this web project is that no client-side code was modified. This was a backend change, so it isn’t one we wanted surfaced.

Figure 5: The PR doesn’t contain user-facing changes.

So let’s try to improve this:

First prompt file

---
functions:
  - name: bash
    description: Run a bash script in the utilities container.
    parameters:
      type: object
      properties:
        command:
          type: string
          description: The command to send to bash
    container:
      image: wbitt/network-multitool
      command:
        - "bash"
        - "-c"
        - "{{command|safe}}"
  - name: git
    description: Run a git command.
    parameters:
      type: object
      properties:
        command:
          type: string
          description: The git command to run, excluding the `git` command itself
    container:
      image: alpine/git
      entrypoint:
        - "/bin/sh"
      command:
        - "-c"
        - "git --no-pager {{command|safe}}"
---

# prompt system

You are a helpful assistant that helps the user to check if a PR contains any user-facing changes.

You are given a container to run bash in with the following tools:

  curl, wget, jq
and default alpine linux tools too.

# prompt user
You are at $PWD of /project, which is a git repo.

Checkout branch `{{branch}}`.

Diff the changes and report any containing user facing changes

This prompt was promising, but it ended up with a few blocking flaws. The reason is that using git to compare files is quite tricky for an LLM.

  • git diff uses a pager, and therefore needs the --no-pager arg to send stdout to the conversation.
  • The total number of files affected via git diff can be quite large.
  • Given each file, the raw diff output can be massive and difficult to parse.
  • The important files changed in a PR might be buried with many extra files in the diff output.
  • The container has many more tools than necessary, allowing the LLM to hallucinate.

The agent needs some understanding of the repo to determine the sorts of files that contain user-facing changes, and it needs to be capable of seeing just the important pieces of information.

Our next pass involves a few tweaks:

  • Switch to the alpine/git image and a file writer as the only tools necessary.
  • Use the --name-only and --no-pager args.
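With those tweaks, the Git tool call reduces to something like this sketch:

```
git --no-pager diff --name-only main...
```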
# ROLE assistant

The following files are likely to contain user-facing changes as they mainly consist of UI components, hooks, and API functionalities.

```
file1.ts
fil2.tsx
file3.tsx
...
```

Remember that this isn’t a guarantee of whether there are user-facing changes, but just an indication of where they might be if there are any.

Giving the agent the run-javascript-sandbox tool allowed it to write a script to save the output for later.

Figure 6: Folder called user-changes with files.txt.

To check out the final prompt here, use our Gist.

Expert knowledge

This is a great start; however, we now need to inspect the files themselves for user-facing changes. When we started this, we realized that user-facing changes could manifest in a diverse set of diffs, so we needed to include expert knowledge. We synced up with Mark Higson, a staff SWE currently working on the frontend platform here at Docker. Mark provided key advice about what “user-facing” changes look like in many repos at Docker, so we baked those tips into the prompt.

Straightforward approaches

Looking for changes in text nodes found in a JSX tree is the easiest example.

JSX node with interpolation

<div>{functionReturningString()}</div>

If the result is a string, it is probably user-facing, but the components that create the string could be elsewhere, so look for:

Nuanced indicators

  • Standard user-facing components. Example: notifications. If a notification’s props change, we can likely infer that it is a user-facing change.
  • Constructors for commonly used components. Example: errors. If an Error() is constructed with a different argument, we know that error could show up differently.
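For example, hypothetical diff hunks like these would be flagged under the indicators above, because the changed arguments are text a user may eventually see:

```
- <Notification title="Update Available" />
+ <Notification title="Update available" />

- throw new Error("Failed to Save Settings");
+ throw new Error("Failed to save settings");
```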

The key for UI reviewers is whether the text itself changes, rather than the layout.

So, despite not being able to catch everything that could be rendered, we will be able to deliver value by focusing on these areas.

Diffing the files

There are a few approaches to finding changes in the files. To start, we’ll use git again.

Before we included the expert advice, the LLM hallucinated changes that weren’t actually represented in the git diff output. With the advice in place, we see much more focused outputs from the LLM.

Our second prompt just needs to read the user-facing files that we already saved, and then compare them to main. After a bit of trial and error with git, we found that a combination of --no-color and --minimal was able to give the LLM enough context without consuming too many tokens for each change.
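Concretely, the diff step settles on an invocation along these lines (a sketch; the file list comes from the saved /thread/user-changes/files.txt):

```
git --no-pager diff main --no-color --minimal $(cat /thread/user-changes/files.txt)
```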

There’s a lot of buzz around generating and reading diffs with LLMs. The teams at Aider and Cursor are both focusing on fine-tuning LLMs using speculative decoding.

For our use cases, however, we found that the average “big” PR at Docker was not a problem for traditional LLMs when given good advice and the right tools.

So here’s the second prompt:

## User-facing content

The following pieces of code are what we want to focus on for user-facing content:

Text Nodes in any JSX or TSX files.

Text Nodes in any HTML files.

Standard user-facing components. Example: Definitions/Constructors for notifications, toasts, and errors.

# prompt user

Can you tell me if there are any user-facing changes in {{branch}}?

To check out the full prompt here, use our Gist.

Output

Note: Some full paths, branches, and diffs are trimmed or replaced with placeholders for this article.

# ROLE assistant

```json
{
  "args": ["checkout", "branch"]
}
```

# ROLE tool (git)


Already on 'branch'
Your branch is up to date with 'origin/branch'.

# ROLE assistant

```json
{
  "path": "/thread/user-changes/files.txt"
}
```

*entering tool*

# ROLE tool (read-file)

path1
path2
path3
...rest of paths

# ROLE assistant

```json
{
  "args": ["diff", "main", "--no-color", "--minimal", path1 path2 path3...]
}
```

*entering tool*

# ROLE tool (git)

<Diff output truncated>
# ROLE assistant

The following user-facing changes were detected between the "main" branch and the "branch" branch:

1. In `File1.tsx` file, the text "Sign In" was modified to "Sign in".

2. In `File2.ts` file, titles "Databases & Storage", "Monitoring & Observability", "Developer Tools", "New Extensions", were changed to "Databases & storage", "Monitoring & observability", "Developer tools", "New extensions" respectively.

3. In `File3.rtl.test.tsx.snap` and `ImageHeader/index.tsx files`, the text "Compressed Size " was changed to "Compressed size".

4. In `File4.rtl.test.tsx` and `ImageDetail/index.tsx files`, a test-case previously checking for the existence of "Compressed Size" now looks for "Compressed size".

5. In `File5.tsx file`, the button text "Add Collaborator" has been changed to "Add collaborator".

6. In `route.tsx file`, the collaborators table title "Current Collaborators" was changed to "Current collaborators".

7. In `File6.tsx` file, the description was updated.

8. In `File7.tsx`, `RepositoryOverview.tsx`, `RepositoryReadMe.tsx`, `TagsAndScans.tsx` files, the description when there is no category or description was updated.

9. In `File8.rtl.test.tsx`, `DeleteRepo.tsx`, the text for confirming repo deletion and the delete button text was adjusted.

10. In `route.tsx file`, the filter label "Trusted Content" was changed to "Trusted content".

11. In `File9.tsx`, the text "Your Organizations" has been changed to "Your organizations".

12. In `Component/index.tsx`, the search group label "Trusted Content" has been changed to "Trusted content".

13. In `file10.test.ts` file, a test-case was updated to check for the new text "Sign in".

Overall, this was a positive result:

Figure 7: Mark’s response “Damn, that’s nice output (assuming it’s accurate).”

Follow up:

Figure 8: Mark’s response, “I checked, it’s accurate.”

Another example from a different PR:

The changes in the `branch` branch compared to `main` include several modifications that affect user-facing content, particularly in JSX and TSX files. Here is a summary of the key changes:

...

These changes primarily involve simplification and refactoring of the existing UI logic, possibly to streamline the user interaction by removing complexities related to feature flags and reducing the use of modals or conditional rendering for specific purchasing flows.

Try it yourself

Here is a markdown file that you can paste into VSCode to try these prompts on your own branch. In the last line, update my-branch to one of your local branches that you’d like to review: https://gist.github.com/ColinMcNeil/2e8f25e2d4092f3c7a0ce8992d2e197c#file-readme-md

Next steps

This is already a promising flow. For example, a tech writer could clone the git repo and run this prompt to inspect a branch for user-facing changes. From here, we might extend the functionality:

  • Allow the user to specify a PR to review without knowing the branch or needing to use Git directly.
  • Automatic git clone & pull with auth.
  • Support larger PRs (more than 15 changed files) by allowing agents to automate their tasks.
  • “Baking” the final flow into CI/CD so that it can automatically assign reviewers to relevant PRs.

If you’re interested in running this prompt on your own repo or just want to follow along with the code, watch our new public repo and reach out. We also appreciate your GitHub Stars.

Everything we’ve discussed in this blog post is available for you to try out on your own projects. 

For more on what we’re doing at Docker, subscribe to our newsletter.

Learn more

From Legacy to Cloud-Native: How Docker Simplifies Complexity and Boosts Developer Productivity https://www.docker.com/blog/from-legacy-to-cloud-native-how-docker-simplifies-complexity-and-boosts-developer-productivity/ Fri, 13 Dec 2024 13:30:55 +0000 https://www.docker.com/?p=66183 Modern application development has evolved dramatically. Gone are the days when a couple of developers, a few machines, and some pizza were enough to launch an app. As the industry grew, DevOps revolutionized collaboration, and Docker popularized containerization, simplifying workflows and accelerating delivery. 

Later, DevSecOps brought security into the mix. Fast forward to today, and the demand for software has never been greater, with more than 750 million cloud-native apps expected by 2025.

This explosion in demand has created a new challenge: complexity. Applications now span multiple programming languages, frameworks, and architectures, integrating both legacy and modern systems. Development workflows must navigate hybrid environments — local, cloud, and everything in between. This complexity makes it harder for companies to deliver innovation on time and stay competitive. 


To overcome these challenges, you need a development platform that’s as reliable and ubiquitous as electricity or Wi-Fi — a platform that works consistently across diverse applications, development tools, and environments. Whether you’re just starting to move toward microservices or fully embracing cloud-native development, Docker meets your team where they are, integrates seamlessly into existing workflows, and scales to meet the needs of individual developers, teams, and entire enterprises.

Docker: Simplifying the complex

The Docker suite of products provides the tools you need to accelerate development, modernize legacy applications, and empower your team to work efficiently and securely. With Docker, you can:

  • Modernize legacy applications: Docker makes it easy to containerize existing systems, bringing them closer to modern technology stacks without disrupting operations.
  • Boost productivity for cloud-native teams: Docker ensures consistent environments, integrates with CI/CD workflows, supports hybrid development environments, and enhances collaboration.

Consistent environments: Build once, run anywhere

Docker ensures consistency across development, testing, and production environments, eliminating the dreaded “works on my machine” problem. With Docker, your team can build applications in unified environments — whether on macOS, Windows, or Linux — for reliable code, better collaboration, and faster time to market.

With Docker Desktop, developers have a powerful GUI and CLI for managing containers locally. Integration with popular IDEs like Visual Studio Code allows developers to code, build, and debug within familiar tools. Built-in Kubernetes support enables teams to test and deploy applications on a local Kubernetes cluster, giving developers confidence that their code will perform in production as expected.

Integrated workflows for hybrid environments

Development today spans both local and cloud environments. Docker bridges the gap and provides flexibility with solutions like Docker Build Cloud, which speeds up build pipelines by up to 39x using cloud-based, multi-platform builders. This allows developers to focus more on coding and innovation, rather than waiting on builds.

Docker also integrates seamlessly with CI/CD tools like Jenkins, GitLab CI, and GitHub Actions. This automation reduces manual intervention, enabling consistent and reliable deployments. Whether you’re building in the cloud or locally, Docker ensures flexibility and productivity at every stage.
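As a sketch of what that looks like with GitHub Actions and Docker’s published actions (the version tags, secret names, and image name are illustrative):

```yaml
name: build-and-push
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3   # enable Buildx/BuildKit builds
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v6     # build the image and push it
        with:
          push: true
          tags: my-org/my-app:latest
```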

Team collaboration: Better together

Collaboration is central to Docker. With integrations like Docker Hub and other registries, teams can easily share container images and work together on builds. Docker Desktop features like Docker Debug and the Builds view dashboards empower developers to troubleshoot issues together, speeding up resolution and boosting team efficiency.

Docker Scout provides actionable security insights, helping teams identify and resolve vulnerabilities early in the development process. With these tools, Docker fosters a collaborative environment where teams can innovate faster and more securely.

Why Docker?

In today’s fast-paced development landscape, complexity can slow you down. Docker’s unified platform reduces complexity as it simplifies workflows, standardizes environments, and empowers teams to deliver software faster and more securely. Whether you’re modernizing legacy applications, bridging local and cloud environments, or building cutting-edge, cloud-native apps, Docker helps you achieve efficiency and scale at every stage of the development lifecycle.

Docker offers a unified platform that combines industry-leading tools — Docker Desktop, Docker Hub, Docker Build Cloud, Docker Scout, and Testcontainers Cloud — into a seamless experience. Docker’s flexible plans ensure there’s a solution for every developer and every team, from individual contributors to large enterprises.

Get started today

Ready to simplify your development workflows? Start your Docker journey now and equip your team with the tools they need to innovate, collaborate, and deliver with confidence.

Looking for tips and tricks? Subscribe to Docker Navigator for the latest updates and insights delivered straight to your inbox.

Learn more

Tackle These Key Software Engineering Challenges to Boost Efficiency with Docker https://www.docker.com/blog/tackle-software-engineering-challenges-to-boost-efficiency/ Fri, 13 Dec 2024 13:26:27 +0000 https://www.docker.com/?p=66294 Software engineering is a dynamic, high-pressure field where development teams encounter a variety of challenges every day. As software development projects become increasingly complex, engineers must maintain high-quality code, meet time constraints, collaborate effectively, and prevent security vulnerabilities. At the same time, development teams can be held back by inefficiencies that can hinder productivity and speed.

Let’s explore some of the most common software engineering challenges and how Docker’s tools streamline the inner loop of cloud-native workflows. These tools help developers overcome pain points, boost productivity, and deliver better software faster.


Top 4 software engineering challenges developers face

Let’s be real — software development teams face a laundry list of challenges. From managing dependencies across teams to keeping up with the latest threats in an increasingly complex software ecosystem, these obstacles can quickly become roadblocks that stifle progress. Let’s dive into some of the most significant software engineering challenges that developers face today and how Docker can help:

1. Dependency management

One of the most common pain points in software engineering is managing dependencies. In any large development project, multiple teams might work on different parts of the codebase, often relying on various third-party libraries and services. The complexity increases when these dependencies span across different environments and versions.

The result? Version conflicts, broken builds, deployment failures, and hours spent troubleshooting. This process can become even more cumbersome when working with legacy code or when different teams work with conflicting versions.

Containerize your applications with their dependencies

Docker allows developers to package all their apps and dependencies into neat, lightweight containers. Think of these containers as “time capsules” that hold everything your app needs to run smoothly, from libraries and tools to configurations. And because these containers are portable, you get the same app behavior on your laptop, your testing server, or in production — no more hoping that “it worked on my machine” when it’s go-time.

No more version conflict drama. No more hours spent trying to figure out which version of the library your coworker’s been using. Docker ensures that everyone on the team works with the same setup. Consistent environments, happy devs, and no more dependency issues!
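As a minimal sketch, a Dockerfile for a Node.js service pins the runtime and dependencies in one place (the base image tag and file names are illustrative):

```dockerfile
# Pin the runtime version so every environment uses the same Node.js
FROM node:20-alpine
WORKDIR /app
# Install dependencies from the lockfile for reproducible builds
COPY package*.json ./
RUN npm ci
# Copy the application source and define the start command
COPY . .
CMD ["node", "server.js"]
```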

2. Testing complexities

Testing presents another significant challenge for developers. In an ideal world, tests would run in an environment that perfectly mirrors production; however, this is rarely the case. Developers often encounter problems when testing code in isolated environments that don’t reflect real-world conditions. As a result, bugs that might have been caught early in development are only discovered later, leading to costly fixes and delays.

Moreover, when multiple developers work in different environments or use different tools, the quality of tests can be inconsistent, and issues might be missed altogether. This leads to inefficiencies and makes it harder to ensure that your software is functional and reliable.

Leverage cloud-native testing environments that match production

One of Docker’s most significant benefits is its ability to create cloud-native testing environments. With Testcontainers Cloud, you can define test dependencies as code and run them in containers, creating consistent, reliable testing environments that scale and closely match production. This ensures that bugs and issues are caught earlier in the development cycle, reducing the time spent on troubleshooting and improving the overall quality of the software. 

Docker Hub offers a repository of pre-configured images and environments, enabling developers to quickly share and collaborate on testing setups. This eliminates inconsistencies between test environments, ensuring all teams work with the same configurations and tools.

3. Lack of visibility and collaboration

Software development today often involves many developers working on different parts of a project simultaneously. This collaborative approach has obvious benefits, but can also lead to significant challenges. In a multi-developer environment, tracking changes, ensuring consistency, and maintaining smooth collaboration across teams can be hard.

Without proper visibility into the software development process, identifying issues in real-time and keeping everyone aligned becomes difficult. In many cases, teams end up working in silos, each using their own tools and systems. This lack of coherence can lead to misunderstandings, duplication of efforts, and delays in achieving milestones.

Accelerate teamwork with shared images, caches, and insights

Docker fosters collaboration by offering an integrated ecosystem where developers can seamlessly share images, cache, templates, and more. For example, Docker Hub and Hardened Docker Desktop allow teams to push, pull, and share secure images, making it easier to get started quickly using all the right configurations. Meanwhile, teams can also cut down on time-consuming builds and resolve failed builds with the Docker Build Cloud shared cache and Build insights.

Docker’s streamlined workflows provide greater visibility into the development process. With this improved collaboration and integrated workflows, software developers can enjoy faster development cycles and more time to innovate.

4. Security risks

Security is often a major concern in software development, yet it’s a challenge that many teams struggle to address consistently. Developers are constantly working under tight deadlines to release new features and fixes, which can sometimes push security considerations to the sidelines. As a result, vulnerabilities can be unintentionally introduced into the codebase through outdated libraries, insecure configurations, and even simple coding oversights.

The main challenge with security lies in identifying and managing risks across all development stages and environments. Developers must follow security protocols diligently and vulnerabilities need to be patched quickly, especially when building software for organizations with strict security regulations. This becomes increasingly difficult when multiple teams work on separate components, each potentially introducing its own security concerns.

Embed security into every phase of the development lifecycle

Docker solves these challenges by integrating security and compliance from build to production, without sacrificing speed or flexibility. For example, Docker Scout offers continuous vulnerability scanning and actionable insights, enabling teams to identify and address risks early. And with increased visibility into dependencies, images, and remediation recommendations, developers can be set up to prevent outdated libraries and insecure configurations from reaching production.

With tools like Hardened Docker Desktop, Image Access Management (IAM), and Registry Access Management (RAM), Docker reduces the complexity of security oversight while ensuring compliance. These features help organizations avoid costly vulnerabilities, safeguard intellectual property, and maintain customer trust without slowing development speed. This simplified security management allows developers to deliver faster without compromising security.

Adopt Docker to overcome key challenges in software development

From dependency management to security risks, software developers face numerous challenges on their journey to deliver high-quality, secure applications. Docker’s unified development suite streamlines every stage of the inner loop, combining Docker Desktop, Docker Hub, Docker Build Cloud, Docker Scout, and Testcontainers Cloud into one powerful, cloud-native workflow ecosystem.

By streamlining workflows, enhancing collaboration, embedding security into every stage of development, and providing consistent testing environments, Docker empowers teams to build, test, and ship cloud-native applications with unparalleled speed and reliability. Whether you’re tackling legacy code or scaling modern applications, Docker ensures your development process remains efficient, secure, and ready for the demands of today’s fast-paced software landscape.

Docker’s subscription plans offer flexible, scalable access to a unified inner-loop suite, allowing teams of any size to accelerate workflows, ensure consistency, and build better software faster. It’s more than a set of tools — it offers a cohesive platform designed to transform your development lifecycle and keep your team competitive, efficient, and secure.

Ready to unlock your team’s full potential? Check out our white paper, Reducing Every-Day Complexities for More Efficient Software Development with Docker, to discover more about empowering developers to work more efficiently with simplified workflows, enhanced collaboration, and integrated security.

Explore the Docker suite of products to access the full power of the unified development suite and accelerate your team’s workflows today.

]]>