Docker for Web Developers: Getting Started with the Basics
https://www.docker.com/blog/docker-for-web-developers/
Tue, 17 Sep 2024 13:38:49 +0000

Docker is known worldwide as a popular application containerization platform. But it also has a lesser-known, intriguing side: it’s a popular go-to platform among web developers for its speed, flexibility, broad user base, and collaborative capabilities.

Docker has been growing as a modern solution that brings innovation to web development using containerization. With containers, developers and web development projects can become more efficient, save time, and drive fresh creativity. Web developers use Docker for development because it ensures consistency across different environments, eliminating the “it works on my machine” problem. Docker also simplifies dependency management, enhances resource efficiency, supports scalable microservices architectures, and allows for rapid deployment and rollback, making it an indispensable tool for modern web development projects.

In this post, we dive into the benefits of using Docker in businesses from small to large, and review Docker’s broad capabilities, strengths, and features for bolstering web development and developer productivity. 


What is Docker?

Docker is secure, out-of-the-box containerization software offering developers and teams a robust, hybrid toolkit to develop, test, monitor, ship, deploy, and run enterprise and web applications. Containerization lets developers separate their applications from infrastructure so they can run them without worrying about what is installed on the host, giving development teams flexibility and collaborative advantages over virtual machines, while delivering better source code faster. 

The Docker suite enables developers to package and run their application code in lightweight, local, standardized containers that have everything needed to run the application — including an operating system and required services. Docker allows developers to run many containers simultaneously on a host, while also allowing those containers to be shared with others. Within this collaborative workspace, teams can communicate directly and productively, and development processes become easier, more accurate, and more secure. Many of the components in Docker are open source, including Docker Compose, BuildKit, the Docker command-line interface (Docker CLI), containerd, and more.

As the #1 containerization software for developers and teams, Docker is well-suited for all flavors of development. Highlights include: 

  • Docker Hub: The world’s largest repository of container images, which helps developers and open source contributors find, use, and share container images.
  • Docker Compose: A tool for defining and running multi-container applications.
  • Docker Engine: An open source containerization technology for building and containerizing applications.
  • Docker Desktop: Includes the Docker Engine and other open source components; proprietary components; and features such as an intuitive GUI, synchronized file shares, access to cloud resources, debugging features, native host integration, governance, and security features that support Enhanced Container Isolation (ECI), air-gapped containers, and administrative settings management.
  • Docker Build Cloud: A Docker service that lets developers build their container images on a cloud infrastructure that ensures fast builds anywhere for all team members. 

What is a container?

Containers are lightweight, standalone, executable packages of software that include everything needed to run an application: code, runtime, libraries, environment variables, and configuration files. Containers are isolated from each other and can be connected to networks or storage and can be used to create new images based on their current states. 

Docker containers are faster and more efficient for software creation than virtualization, which uses a resource-heavy software abstraction layer on top of computer hardware. Additionally, Docker containers require fewer physical hardware resources than virtual machines and communicate with their host systems through well-defined channels.

Why use Docker for web applications?

Docker is a popular choice for developers building enterprise applications for various reasons, including consistent environments, efficient resource usage, speed, container isolation, scalability, flexibility, and portability. And, Docker is popular for web development for these same reasons. 

Consistent environments

Using Docker containers, web developers can build web applications that provide consistent environments from development all the way through to production. Because an isolated container includes all the components needed to run an application, developers can package their applications once and then run them through various development, testing, and production environments to ensure their quality, security, and performance. This approach helps developers prevent the common and frustrating “but it works on my machine” conundrum, ensuring that the code will run and perform well anywhere, from development through deployment.

Efficiency in using resources

With its lightweight architecture, Docker uses system resources more efficiently than virtual machines, allowing developers to run more applications on the same hardware. Multiple containers can run on a single host, gaining resource efficiency from the isolation and allocation features that containers incorporate. Additionally, containers require less memory and disk space to perform their tasks, saving on hardware costs and making resource management easier. Docker also saves development time by allowing container images to be reused as needed.

Speed

Docker’s design and components also give developers significant speed advantages when setting up and tearing down container environments; its lightweight, flexible architecture lets these processes complete in seconds. This allows developers to rapidly iterate on their containerized applications, increasing their productivity for writing, building, testing, monitoring, and deploying their creations.

Isolation

Docker’s application isolation capabilities provide huge benefits for developers, allowing them to write code and build their containers and applications simultaneously, with changes made in one not affecting the others. Isolation also makes it easier to find and contain any bad code before it is used elsewhere, improving security and manageability.

Scalability, flexibility, and portability

Docker’s flexible platform design also gives developers broad capabilities to easily scale applications up or down based on demand, while also allowing them to be deployed across different servers. These features give developers the ability to manage different workloads and system resources as needed. And, its portability features mean that developers can create their applications once and then use them in any environment, further ensuring their reliability and proper operation through the development cycle to production.

How web developers use Docker

There is a wide range of Docker use cases for today’s web developers, including its flexibility as a local development environment that can be quickly set up to match desired production environments; as an important partner for microservices architectures, where each service can be developed, tested, and deployed independently; or as an integral component in continuous integration and continuous deployment (CI/CD) pipelines for automated testing and deployment.

Other important advantages include a strong, knowledgeable user community that helps developers build their containerization experience and skills; suitability for vital cross-platform production and testing; and a deep catalog of container images usable for a wide range of application needs.

Get started with Docker for web development (in 6 steps)

So, you want to get a Docker container up and running quickly? Let’s dive in using the Docker Desktop GUI. In this example, we will use Docker Desktop for Microsoft Windows, but Docker is also available for macOS and many flavors of Linux.

Step 1: Install Docker Desktop

Start by downloading the installer from the docs or from the release notes.

Double-click Docker Desktop for Windows Installer.exe to run the installer. By default, Docker Desktop is installed at C:\Program Files\Docker\Docker.

When prompted, choose either the WSL 2 or the Hyper-V option on the configuration page, depending on your preferred backend. If your system only supports one of the two options, you will not be able to select which backend to use.

Follow the instructions on the installation wizard to authorize the installer and proceed with the installation. When the installation is successful, select Close to complete the installation process.

Step 2: Create a Dockerfile

A Dockerfile is a text-based file containing the set of instructions that tells Docker exactly how to build a container image. A Dockerfile uses no file extension; in the getting-started-app example, you create a file named Dockerfile in the same directory as the package.json file.
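
For instance, a minimal Dockerfile for the Node.js getting-started-app might look like the following sketch (the node:18-alpine base image and the src/index.js entry point are assumptions; adjust them to match your project):

# Use a small official Node.js base image (the version here is an assumption)
FROM node:18-alpine
# Work inside /app in the container
WORKDIR /app
# Copy the project, including package.json, into the image
COPY . .
# Install the dependencies declared in package.json
RUN npm install
# Start the app; adjust the path if your entry point differs
CMD ["node", "src/index.js"]
# The sample app is assumed to listen on port 3000
EXPOSE 3000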

A Dockerfile contains details about the container’s operating system, file locations, environment, dependencies, configuration, and more. Check out the useful Docker best practices documentation for creating quality Dockerfiles. 

Here is a basic Dockerfile example for setting up an Apache web server.

Create a Dockerfile in your project:

# Start from the official Apache HTTP Server (httpd) image
FROM httpd:2.4
# Copy the site's static files into Apache's document root
COPY ./public-html/ /usr/local/apache2/htdocs/

Next, run the commands to build and run the Docker image:

$ docker build -t my-apache2 .
$ docker run -dit --name my-running-app -p 8080:80 my-apache2

Visit http://localhost:8080 to see it working.

Step 3: Build your Docker image

The Dockerfile that was just created allows us to start building our first Docker container image. The docker build command run in the previous step builds a new Docker image from the Dockerfile and the related “context,” which is the set of files located in the specified PATH or URL. The build process can refer to any of the files in the context. Docker images begin with a base image that is downloaded from a registry, such as Docker Hub, to start a new image project.
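
For instance, you can rebuild the image with an explicit tag and then confirm that it exists locally (my-apache2 is the image name from the earlier example; substitute your own):

$ docker build -t my-apache2:1.0 .
$ docker image ls my-apache2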

Step 4: Run your Docker container

To run a new container, start with the docker run command, which runs a command in a new container, pulling the image first if needed. By default, when you create or run a container using docker create or docker run, the container does not expose any of its ports to the outside world. To make a port available to services outside of Docker, you must use the --publish or -p flag. This creates a firewall rule on the host, mapping a container port to a port on the Docker host so it can be reached from the outside world.

Step 5: Access your web application

To access a web application running inside a Docker container, you need to publish the container’s port to the host. This can be done using the docker run command with the --publish or -p flag, whose value takes the format [host_port]:[container_port].

Here is an example of how to run a container and publish its port using the Docker CLI:

$ docker run -d -p 8080:80 docker/welcome-to-docker

In this command, the first value (8080) is the host port, the port on your local machine that will be used to access the application running inside the container. The second value (80) is the container port, the port that the application inside the container listens on for incoming connections. Hence, the command binds port 8080 on the host to port 80 in the container.

After running the container with the published port, you can access the web application by opening a web browser and visiting http://localhost:8080.

You can also use Docker Compose to run the container and publish its port. Here is an example of a compose.yaml file that does this:

services:
  app:
    image: docker/welcome-to-docker
    ports:
      - 8080:80

After creating this file, you can start the application with the docker compose up command. Then, you can access the web application at http://localhost:8080.

Step 6: Make changes and update

Updating a Docker application in a container requires several steps. With the command-line interface, use the docker stop command to stop the container, then remove it with the docker rm command. Next, start a new, updated container with another docker run command. The old container must be stopped and removed first because it is already using the host’s published port (port 8080 in our example), and only one process on the machine — including containers — can listen on a specific port at a time.
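
For example, with the Apache container from the earlier steps (my-running-app and my-apache2 are the names used in this article; substitute your own), the update flow might look like this:

$ docker stop my-running-app
$ docker rm my-running-app
$ docker build -t my-apache2 .
$ docker run -dit --name my-running-app -p 8080:80 my-apache2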

Conclusion

In this blog post, we looked at how Docker brings valuable benefits to web developers, speeding up and improving their work, and we walked through the basics of getting started with the platform, from installing Docker Desktop to building, running, and updating a first containerized web application.

Docker streamlines web development workflows thanks to its lightweight architecture and its collaboration, application design, scalability, and other benefits. It expands the capabilities of web application developers, giving them flexible tools for everything from building better code to testing, monitoring, and deploying reliable code more quickly.

Subscribe to our newsletter to stay up-to-date about Docker and its latest uses and innovations. 

Learn more

Secure by Design for AI: Building Resilient Systems from the Ground Up
https://www.docker.com/blog/secure-by-design-for-ai/
Mon, 16 Sep 2024 14:23:26 +0000

As artificial intelligence (AI) has erupted, Secure by Design for AI has emerged as a critical paradigm. AI is integrating into every aspect of our lives — from healthcare and finance to developer tools to autonomous vehicles and smart cities — and its integration into critical infrastructure has necessitated that we move quickly to understand and combat threats.

Necessity of Secure by Design for AI

AI’s rapid integration into critical infrastructure has accelerated the need to understand and combat potential threats. Security measures must be embedded into AI products from the beginning and evolve as the model evolves. This proactive approach ensures that AI systems are resilient against emerging threats and can adapt to new challenges as they arise. In this article, we will explore two contrasting examples: the developer tools industry and the healthcare industry.


Complexities of threat modeling in AI

AI introduces new challenges and conundrums when building an accurate threat model. Unlike traditional systems, where simple edit and validation checks can be programmed systematically, AI validation checks need to learn along with the system and focus on data manipulation, corruption, and extraction.

  • Data poisoning: Data poisoning is a significant risk in AI, where the integrity of the data used by the system can be compromised. This can happen intentionally or unintentionally and can lead to severe consequences. For example, bias and discrimination in AI systems have already led to issues, such as the wrongful arrest of a man in Detroit due to a false facial recognition match. Such incidents highlight the importance of unbiased models and diverse data sets. Testing for bias and involving a diverse workforce in the development process are critical steps in mitigating these risks.

In healthcare, for example, bias may be simpler to detect. You can examine data fields based on areas such as gender, race, etc. 

In development tools, bias is less clear-cut. Bias could result from the underrepresentation of certain development languages, such as Clojure. Bias may even result from code samples based on regional differences in coding preferences and teachings. In developer tools, you likely won’t have the information available to detect this bias. IP addresses may give you information about where a person is living currently, but not about where they grew up or learned to code. Therefore, detecting bias will be more difficult. 

  • Data manipulation: Attackers can manipulate data sets with malicious intent, altering how AI systems behave. 
  • Privacy violations: Without proper data controls, personal or sensitive information could unintentionally be introduced into the system, potentially leading to privacy violations. Establishing strong data management practices to prevent such scenarios is crucial.
  • Evasion and abuse: Malicious actors may attempt to alter inputs to manipulate how an AI system responds, thereby compromising its integrity. There’s also the potential for AI systems to be abused in ways developers did not anticipate. For example, AI-driven impersonation scams have led to significant financial losses, such as the case where an employee transferred $26 million to scammers impersonating the company’s CFO.

These examples underscore the need for controls at various points in the AI data lifecycle to identify and mitigate “bad data” and ensure the security and reliability of AI systems.

Key areas for implementing Secure by Design in AI

To effectively secure AI systems, implementing controls in three major areas is essential (Figure 1):

Illustration showing flow of data from Users to Data Management to Model Tuning to Model Maintenance.
Figure 1: Key areas for implementing security controls.

1. Data management

The key to data management is to understand what data needs to be collected to train the model, to identify the sensitive data fields, and to prevent the collection of unnecessary data. It also involves ensuring you have the correct checks and balances in place to keep unneeded or bad data out of the system.

In healthcare, sensitive data fields are easy to identify. Doctors’ offices often collect national identifiers, such as driver’s licenses, passports, and Social Security numbers. They also collect date of birth, race, and many other sensitive data fields. If the tool is aimed at helping doctors identify potential conditions faster based on symptoms, you would need anonymized data but would still need to collect certain factors such as age and race. You would not need to collect national identifiers.

In developer tools, sensitive data may not be as clearly defined. For example, an environment variable may be used to pass secrets or pass confidential information, such as the image name from the developer to the AI tool. There may be secrets in fields you would not suspect. Data management in this scenario involves blocking the collection of fields where sensitive data could exist and/or ensuring there are mechanisms to scrub sensitive data built into the tool so that data does not make it to the model. 

Data management should include the following:

  • Implementing checks for unexpected data: In healthcare, this process may involve “allow” lists for certain data fields to prevent collecting irrelevant or harmful information. In developer tools, it’s about ensuring the model isn’t trained on malicious code, such as unsanitized inputs that could introduce vulnerabilities.
  • Evaluating the legitimacy of users and their activities: In healthcare tools, this step could mean verifying that users are licensed professionals, while in developer tools, it might involve detecting and mitigating the impact of bot accounts or spam users.
  • Continuous data auditing: This process ensures that unexpected data is not collected and that the data checks are updated as needed. 

2. Alerting and monitoring 

With AI, alerting and monitoring is imperative to ensuring the health of the data model. Controls must be both adaptive and configurable to detect anomalous and malicious activities. As AI systems grow and adapt, so too must the controls. Establish thresholds for data, automate adjustments where possible, and conduct manual reviews where necessary.

In a healthcare AI tool, you might set a threshold before new data is surfaced to ensure its accuracy. For example, if patients begin reporting a new symptom that is believed to be associated with diabetes, you may not report this to doctors until it is reported by a certain percentage (15%) of total patients. 

In a developer tool, this might involve determining when new code should be incorporated into the model as a prompt for other users. The model would need to be able to log and analyze user queries and feedback, track unhandled or poorly handled requests, and detect new patterns in usage. Data should be analyzed for high frequencies of unhandled prompts, and alerts should be generated to ensure that additional data sets are reviewed and added to the model.
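
As a rough illustration only (not part of any specific tool), a monitoring job could apply simple threshold rules like the following Python sketch, where the 15% surfacing threshold and the 5% alert rate are assumed placeholder values:

def should_surface_symptom(reports: int, total_patients: int, threshold: float = 0.15) -> bool:
    # Surface a newly reported symptom to doctors only once enough patients report it
    return total_patients > 0 and reports / total_patients >= threshold

def check_unhandled_prompts(unhandled: int, total_prompts: int, alert_rate: float = 0.05) -> bool:
    # Flag the model for review when unhandled prompts exceed an acceptable rate
    return total_prompts > 0 and unhandled / total_prompts > alert_rate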

3. Model tuning and maintenance

Producers of AI tools should regularly review and adjust AI models to ensure they remain secure. This includes monitoring for unexpected data, adjusting algorithms as needed, and ensuring that sensitive data is scrubbed or redacted appropriately.

For healthcare, model tuning may be more intensive. Results may be compared to published medical studies to ensure that patient conditions are in line with other baselines established across the world. Audits should also be conducted to ensure that doctors with reported malpractice claims or doctors whose medical license has been revoked are scrubbed from the system to ensure that potentially compromised data sets are not influencing the model. 

In a developer tool, model tuning will look very different. You may look at hyperparameter optimization using techniques such as grid search, random search, and Bayesian search. You may study subsets of data; for example, you may perform regular reviews of the most recent data looking for new programming languages, frameworks, or coding practices. 

Model tuning and maintenance should include the following:

  • Perform data audits to ensure data integrity and that unnecessary data is not inadvertently being collected. 
  • Review whether “allow” lists and “deny” lists need to be updated.
  • Regularly audit and monitor alerts for algorithms to determine if adjustments need to be made; consider the population of your user base and how the model is being trained when adjusting these parameters.
  • Ensure you have the controls in place to isolate data sets for removal if a source has become compromised; consider unique identifiers that allow you to identify a source without providing unnecessary sensitive data.
  • Regularly back up data models so you can return to a previous version without heavy loss of data if the source becomes compromised.

AI security begins with design

Security must be a foundational aspect of AI development, not an afterthought. By identifying data fields upfront, conducting thorough AI threat modeling, implementing robust data management controls, and continuously tuning and maintaining models, organizations can build AI systems that are secure by design. 

This approach protects against potential threats and ensures that AI systems remain reliable, trustworthy, and compliant with regulatory requirements as they evolve alongside their user base.

Learn more

Announcing Upgraded Docker Plans: Simpler, More Value, Better Development and Productivity
https://www.docker.com/blog/november-2024-updated-plans-announcement/
Thu, 12 Sep 2024 13:09:59 +0000

At Docker, our mission is to empower development teams by providing the tools they need to ship secure, high-quality apps — FAST. Over the past few years, we’ve continually added value for our customers, responding to the evolving needs of individual developers and organizations alike. Today, we’re excited to announce significant updates to our Docker subscription plans that will deliver even more value, flexibility, and power to your development workflows.


Docker accelerating the inner loop

We’ve listened closely to our community, and the message is clear: Developers want tools that meet their current needs and evolve with new capabilities to meet their future needs. 

That’s why we’ve revamped our plans to include access to ALL the tools our most successful customers are leveraging — Docker Desktop, Docker Hub, Docker Build Cloud, Docker Scout, and Testcontainers Cloud. Our new unified suite makes it easier for development teams to access everything they need under one subscription with included consumption for each new product and the ability to add more as they need it. This gives every paid user full access, including consumption-based options, allowing developers to scale resources as their needs evolve. Whether customers are individual developers, members of small teams, or work in large enterprises, the refreshed Docker Personal, Docker Pro, Docker Team, and Docker Business plans ensure developers have the right tools at their fingertips.

These changes increase access to Docker Hub across the board, bring more value into Docker Desktop, and grant access to the additional value and new capabilities we’ve delivered to development teams over the past few years. From Docker Scout’s advanced security and software supply chain insights to Docker Build Cloud’s productivity-generating cloud build capabilities, Docker provides developers with the tools to build, deploy, and verify applications faster and more efficiently.

Areas we’ve invested in during the past year include:

  • The world’s largest container registry. To date, Docker has invested more than $100 million in Docker Hub, which currently stores over 60 petabytes of data and handles billions of pulls each month. We have improved content discoverability, in-depth image analysis, image lifecycle management, and an even broader range of verified high-assurance content on Docker Hub. 
  • Improved insights. From Builds View to inspecting GitHub Actions builds to Build Checks to Scout health scores, we’re providing teams with more visibility into their usage and providing insights to improve their development outcomes. We have additional Docker Desktop insights coming later this year.
  • Securing the software supply chain. In October 2023, we launched Docker Scout, allowing developers to continuously address security issues before they hit production through policy evaluation and recommended remediations, and track the SBOM of their software. We later introduced new ways for developers to quickly assess image health and accelerate application security improvements across the software supply chain.
  • Container-based testing automation. In December 2023, we acquired AtomicJar, makers of Testcontainers, adding container-based testing automation to our portfolio. Testcontainers Cloud offers enterprise features and a scalable, cloud-based infrastructure that provides a consistent Testcontainers experience across the org and centralizes monitoring.
  • Powerful cloud-based builders. In January 2024, we launched Docker Build Cloud, combining powerful, native ARM & AMD cloud builders with shared cache that accelerates build times by up to 39x.
  • Security, control, and compliance for businesses. For our Docker Business subscribers, we’ve enhanced security and compliance features, ensuring that large teams can work securely and efficiently. Role-based access control (RBAC), SOC 2 Type 2 compliance, centralized management, and compliance reporting tools are just a few of the features that make Docker Business the best choice for enterprise-grade development environments. And soon, we are rolling out organizational access tokens to make developer access easier at the organizational level, enhancing security and efficiency.
  • Empowering developers to build AI applications. From introducing a new GenAI Stack to our extension for GitHub Copilot and our partnership with NVIDIA to our series of AI tips content, Docker is simplifying AI application development for our community. 

As we introduce new features and continue to provide — and improve on — the world’s largest container registry, the resources to do so also grow. With the rollout of our unified suites, we’re also updating our pricing to reflect the additional value. Here’s what’s changing at a high level: 

  • Docker Business pricing stays the same but gains the additional value and features announced today.
  • Docker Personal remains — and will always remain — free. This plan will continue to be improved upon as we work to grant access to a container-first approach to software development for all developers. 
  • Docker Pro will increase from $5/month to $9/month, and Docker Team prices will increase from $9/user/month to $15/user/month (annual pricing).
  • We’re introducing image pull and storage limits for Docker Hub. This will impact less than 3% of accounts, the highest commercial consumers. For many of our Docker Team and Docker Business customers with Service Accounts, the new higher image pull limits will eliminate previously incurred fees.   
  • Docker Build Cloud minutes and Docker Scout analyzed repos are now included, providing enough minutes and repos to enhance the productivity of a development team throughout the day.  
  • Implementing consumption-based pricing for all integrated products, including Docker Hub, to provide flexibility and scalability beyond the plans.  

More value at every level

Our updated plans are packed with more features, higher usage limits, and simplified pricing, offering greater value at every tier. Our updated plans include: 

  • Docker Desktop: We’re expanding on Docker Desktop as the industry-leading container-first development solution with advanced security features, seamless cloud-native compatibility, and tools that accelerate development while supporting enterprise-grade administration.
  • Docker Hub: Docker subscriptions cover Hub essentials, such as private and public repo usage. To ensure that Docker Hub remains sustainable and continues to grow as the world’s largest container registry, we’re introducing consumption-based pricing for image pulls and storage. This update also includes enhanced usage monitoring tools, making it easier for customers to understand and manage usage.
View of the Usage dashboard
The Pulls Usage dashboard is now live on Docker Hub, allowing customers to see an organization’s Hub pull data.
  • Docker Build Cloud: We’ve removed the per-seat licenses for Build Cloud and increased the included build minutes for Pro, Team, and Business plans — enabling faster, more efficient builds across projects. Customers will have the option to add build minutes as their needs grow, but they will be surprised at how much time they save with our speedy builders. For customers using CI tools, Build Cloud’s speed can even help save on CI bills. 
  • Docker Scout: Docker Team and Docker Business plans will offer continuous vulnerability analysis for an unlimited number of Scout-enabled repositories. The integration of Docker Scout’s health scores into Docker Pro, Team, and Business plans helps customers maintain security and compliance with ease.
  • Testcontainers Cloud: Testcontainers Cloud helps customers streamline testing workflows, saving time and resources. We’ve removed the per-seat licenses for Testcontainers Cloud under the new plans and included cloud runtime minutes for Docker Pro, Docker Team, and Docker Business, available to use for Docker Desktop or in CI workflows. Customers will have the option to add runtime minutes as their needs grow.

Looking ahead

Docker continues to innovate and invest in our products, and Docker has been recognized most recently as developers’ most used, desired, and admired developer tool in the 2024 Stack Overflow Developer Survey.  

These updates are just the beginning of our ongoing commitment to providing developers with the best tools in the industry. As we continue to invest in our tools and technologies, development teams can expect even more enhancements that will empower them to achieve their development goals. 

New plans take effect starting November 15, 2024. The Docker Hub plan limits will take effect on Feb 1, 2025. No charges on Docker Hub image pulls or storage will be incurred between November 15, 2024, and January 31, 2025. For existing annual and month-to-month customers, these new plan entitlements will take effect at their next renewal date that occurs on or after November 15, 2024, giving them ample time to review and understand the new offerings. Learn more about the new Docker subscriptions and see a detailed breakdown of features in each plan. We’re committed to ensuring a smooth transition and are here to support customers every step of the way. 

Stay tuned for more updates or reach out to learn more. And as always, thank you for being a part of the Docker community. 


FAQ  

  1. I’m a Docker Business customer, what is new in my plan? 

Docker Business list pricing remains the same, but you will now have access to more of Docker’s products:  

  • Instead of paying an additional per-seat fee, Docker Build Cloud is now available to all users in your Docker plan. Learn how to use Build Cloud
  • Docker Build Cloud included minutes are increasing from 800/mo to 1500/mo. 
  • Docker Scout now includes unlimited repos with continuous vulnerability analysis, an increase from 3. Get started with Docker Scout quickstart
  • 1500 Testcontainers Cloud runtime minutes are now included for use either in Docker Desktop or for CI.
  • Docker Hub image pull rate limits have been removed.
  • 1M Docker Hub pulls per month are included. 

If you require additional Build Cloud minutes, Testcontainers Cloud runtime minutes, or Hub pulls or storage, you can add these to your plan with consumption-based pricing. See the pricing page for more details. 

  2. I’m a Docker Team customer, what is new in my plan? 

Docker Team will now include the following benefits:  

  • Instead of paying an additional per-seat fee, Docker Build Cloud is now available to all users in your Docker plan. Learn how to use Build Cloud
  • Docker Build Cloud minutes are increasing from 400/mo to 500/mo.
  • Docker Scout now includes unlimited repos with continuous vulnerability analysis, an increase from 3. Get started with Docker Scout quickstart
  • 500 Testcontainers Cloud runtime minutes are now included for use either in Docker Desktop or for CI.  
  • Docker Hub image pull rate limits will be removed.
  • 100K Docker Hub pulls per month are included.
  • The minimum number of users is 1 (lowered from 5)

Docker Team price will increase from $9/user/month (annual) to $15/user/month (annual) and from $11/user/month (monthly) to $16/user/month (monthly). If you require additional Build Cloud minutes, Testcontainers Cloud runtime minutes, or Hub pulls or storage, you can add these to your plan with consumption-based pricing, or reach out to sales for invoice pricing. See the pricing page for more details. 

  3. I’m a Docker Pro customer, what is new in my plan? 

Docker Pro will now include: 

  • Docker Build Cloud minutes increased from 100/month to 200/month and no monthly fee. Learn how to use Build Cloud.
  • 2 included repos with continuous vulnerability analysis in Docker Scout. Get started with Docker Scout quickstart.  
  • 100 Testcontainers Cloud runtime minutes are now included for use either in Docker Desktop or for CI.
  • Docker Hub image pull rate limits will be removed. 
  • 25K Docker Hub pulls per month are included.

Docker Pro plans will increase from $5/month (annual) to $9/month (annual) and from $7/month (monthly) to $11/month (monthly). If you require additional Build Cloud minutes, Docker Scout repos, Testcontainers Cloud runtime minutes, or Hub pulls or storage, you can add these to your plan with consumption-based pricing. See the pricing page for more details. 

  4. I’m a Docker Personal user, what is included in my plan? 

Docker Personal plans remain free.

When you are logged into your account, you will see additional features and entitlements: 

  • 1 included repo with continuous vulnerability analysis in Docker Scout. Get started with Docker Scout quickstart.
  • Unlimited public Docker Hub repos. 
  • 1 private Docker Hub repo with 2GB storage. 
  • Updated Docker Hub image pull rate limit of 40 pulls/hr/user.

Unauthenticated users will be limited to 10 Docker Hub pulls/hr/IP address.  

Docker Personal users who want to start or continue using Docker Build Cloud may trial the service for seven days, or upgrade to a Docker Pro plan. Docker Personal users may trial Testcontainers Cloud for 30 days. 

  5. Where do I learn more about Docker Hub rate limits and storage changes? 

Check your plan’s details on the new plans overview page. For now, see the new Docker Hub Pulls Usage dashboard to understand your current usage.  

  6. When will new pricing go into effect? 

New pricing will go into effect on November 15, 2024, for all new customers. 

For all existing customers, new pricing will take effect on your next renewal date after November 15, 2024. When you renew, you will receive the benefits and entitlements of the new plans. Between now and your renewal date, your existing plan details will apply. 

  7. Can I keep my existing plan? 

If you are on an annual contract, you will keep your current plan and pricing until your next renewal date that falls after November 15, 2024. 

If you are a month-to-month customer, you may convert to an annual contract before November 14 to stay on your existing plan. You may choose between staying on your existing plan entitlements or the new comprehensive plans. After November 15, all month-to-month renewals will be on the new plans. 

  8. I have a regulatory constraint, is it possible to disable individual services? 

While most organizations will see reduced build times and improved supply chain security, some organizations may have constraints that prevent them from using all of Docker’s services. 

After November 15, the default configurations for Docker Desktop, Docker Hub, Docker Build Cloud, and Docker Scout are enabled for all users. The default configuration for Testcontainers Cloud is disabled. To change your organization’s configuration, the org owner or one of your org admins will be able to disable Docker Scout or Build Cloud in the admin console. 

  9. Can I get a refund on individual products I pay for today (Build Cloud, Scout repos, Testcontainers Cloud)? 

Your current plan will remain in effect until your first renewal date on or after November 15, 2024, for annual customers. At that time, your plan will automatically reflect your new entitlements for Docker Build Cloud and Docker Scout. If you are a current Testcontainers Cloud customer in addition to being a Docker Pro, Docker Team, or Docker Business customer, let your account manager know your org ID so that your included minutes can be applied starting November 15.  

  10. How do I get more help? 

If you have additional questions not addressed in the FAQ, contact your Docker Account Executive or CSM.  

If you need help identifying those contacts or need technical assistance, contact support.

Getting Started with the Labs AI Tools for Devs Docker Desktop Extension
https://www.docker.com/blog/labs-ai-tools-for-devs-docker-desktop-extension/
Mon, 09 Sep 2024 16:32:28 +0000

This ongoing Docker Labs GenAI series explores the exciting space of AI developer tools. At Docker, we believe there is a vast scope to explore, openly and without the hype. We will share our explorations and collaborate with the developer community in real time. Although developers have adopted autocomplete tooling like GitHub Copilot and use chat, there is significant potential for AI tools to assist with more specific tasks and interfaces throughout the entire software lifecycle. Therefore, our exploration will be broad. We will be releasing software as open source so you can play, explore, and hack with us, too.

We’ve released a simple way to run AI tools in Docker Desktop. With the Labs AI Tools for Devs Docker Desktop extension, people who want a simple way to run prompts can easily get started. 

If you’re a prompt author, this approach also allows you to build, run, and share your prompts more easily. Here’s how you can get started.


Get the extension

You can download the extension from Docker Hub. Once it’s installed, enter an OpenAI key.

Import a project

With our approach, the information a prompt needs should be extractable from a project. Add the projects inside which you want to run SDLC tools (Figure 1).

Screenshot showing blue "Add project" button.
Figure 1: Add projects.

Inputting prompts

A prompt can be a git ref or a git URL, which will convert to a ref. You can also import your own local prompt files, which allows you to quickly iterate on building custom prompts.

Sample prompts

(copy + paste the ref)

  • Docker: Generates a runbook for any Docker project. Git ref: github.com:docker/labs-ai-tools-for-devs?ref=main&path=prompts/docker (https://github.com/docker/labs-ai-tools-for-devs/tree/main/prompts/docker)
  • Dockerfiles: Generate multi-stage Dockerfiles for NPM projects. Git ref: github.com:docker/labs-ai-tools-for-devs?ref=main&path=prompts/dockerfiles (https://github.com/docker/labs-ai-tools-for-devs/tree/main/prompts/dockerfiles)
  • Lazy Docker: Generates a runbook for Lazy Docker. Git ref: github.com:docker/labs-ai-tools-for-devs?ref=main&path=prompts/lazy_docker (https://github.com/docker/labs-ai-tools-for-devs/tree/main/prompts/lazy_docker)
  • NPM: Responds with helpful information about NPM projects. Git ref: github.com:docker/labs-ai-tools-for-devs?ref=main&path=prompts/npm (https://github.com/docker/labs-ai-tools-for-devs/tree/main/prompts/npm)
  • ESLint: Runs ESLint in your project. Git ref: github.com:docker/labs-ai-tools-for-devs?ref=main&path=prompts/eslint (https://github.com/docker/labs-ai-tools-for-devs/tree/main/prompts/eslint)
  • ESLint Fix: Runs ESLint in your project and responds with a fix for the first violation it finds. Git ref: github.com:docker/labs-ai-tools-for-devs?ref=main&path=prompts/eslint_fix (https://github.com/docker/labs-ai-tools-for-devs/tree/main/prompts/eslint_fix)
  • Pylint: Runs Pylint in your project, and responds with a fix for the first violation it finds. Git ref: github.com:docker/labs-ai-tools-for-devs?ref=main&path=prompts/pylint (https://github.com/docker/labs-ai-tools-for-devs/tree/main/prompts/pylint)
Screenshot showing blue "Add local prompt" button next to text box in which to enter GitHub ref or URL
Figure 2: Enter a GitHub ref or URL.

Writing and testing your own prompt

Create a prompt file

A prompt file is a markdown file. Here’s an example: prompt.md

# prompt system
You are an assistant who can write comedic monologues in the style of Stephen Colbert.

# prompt user
Tell me about my project.

Now, we need to add information about the project. This is done with mustache templates:

# prompt system
You are an assistant who can write comedic monologues in the style of Stephen Colbert.

# prompt user
Tell me about my project. 

My project uses the following languages:
{{project.languages}}

My project has the following files:
{{project.files}}

Leverage tools in your project

Just like extractors, which can be used to render prompts, we define tools in the form of Docker images. A function image follows the same spec as extractors but in reverse. 

  • The Docker image is automatically bind-mounted to the project.
  • The Docker image entry point is automatically run within the project using --workdir.
  • The first argument will be a JSON payload. This payload is generated when the LLM tries to call our function.
- name: write_files
  description: Write a set of files to my project
  parameters:
    type: object
    properties:
      files:
        type: array
        items:
          type: object
          properties:
            path:
              type: string
              description: the relative path to the file that should be written
            content:
              type: string
              description: the content that should be written to a file
            executable:
              type: boolean
              description: whether to make the file executable
  container:
    image: vonwig/function_write_files:latest
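
For example, when the LLM decides to call write_files, the JSON payload passed as the first argument might look like the following (a hypothetical illustration based on the parameters schema above; the actual files depend on your prompt):

{
  "files": [
    {
      "path": "Dockerfile",
      "content": "FROM node:18-alpine\nWORKDIR /app\nCOPY . .\n",
      "executable": false
    }
  ]
}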

Test your prompt

  1. Add the file to a Git repository and push to a public remote.
  2. Paste the URL to reference the file on GitHub.

Alternatively, import a local prompt and select the file on your computer.

Screenshot showing text box for entering the URL for the folder.
Figure 3: Add the URL for the folder.

3. Run.

## ROLE assistant

Don't even get me started on the files, I mean, have you ever sat down and really looked at a list of files? This project has got more layers than that seven-layer bean dip I had at last weekend's potluck. This project isn't just files on files, its files within files, its dot something after dot something else – and before you ask: Yes, all of these are REQUIRED!

Coming down to Dockerfile. Now, I've seen some Dockerfiles but our Dockerfile, folks, it's something else. It lifts, it grinds, it effectively orchestrates our code like a veteran conductor at the symphony. We also have multiple templates because who doesn't love a good template, right?

Oh, and did I mention the walkthroughs and the resources? Let's just say this isn't a "teach a man to fish" situation. This is more of a “teach a man to create an entire fishing corporation” scenario. Now THAT'S dedication.

Finally we've got the main.js, and let's be real, is it even a project without a main.js anymore?

As always, feel free to follow along in our new public repo. Everything we’ve discussed in this blog post is available for you to try out on your own projects.

For more on what we’re doing at Docker, subscribe to our newsletter.

Learn more

Why We Need More Gender Diversity in the Cybersecurity Space
https://www.docker.com/blog/why-we-need-more-gender-diversity-in-the-cybersecurity-space/
Fri, 06 Sep 2024 14:21:18 +0000

What does it mean to be diverse? At the root of diversity is the ability to bring people together with different perspectives, experiences, and ideas. It’s about enriching the work environment to lead to more innovative solutions, better decision-making, and a more inclusive environment.

For me, it’s about ensuring that my daughter one day knows that it really is okay for her to be whatever she wants to be in life. That she isn’t bound by a gender stereotype or what is deemed appropriate based on her sex.  

This is why building a more diverse workforce in technology is so critical. I want the children of the world, my children, to be able to see themselves in the people they admire, in the fields they are interested in, and to know that the world is accepting of the path that they choose.

Monday, August 26th, was Women’s Equality Day, and while I recognize that women have come a long way, there is still work to be done. Diversity is not just a buzzword — it’s a necessity. When diverse perspectives converge, they create a rich ground for innovation. 


Women in cybersecurity

Despite progress in many areas, women are still underrepresented in cybersecurity. Let’s look at some key statistics from the ISC2 Cybersecurity Workforce Study published in 2023:

  • Women make up 26% of the cybersecurity workforce globally. 
  • The average global salary of women who participated in the ISC2 survey was US$109,609 compared to $115,003 for men. For US women, the average salary was $141,066 compared to $148,035 for men. 

Making progress

We should recognize where we have had wins in cybersecurity diversity, too.

The 2024 Cybersecurity Skills Gap global research report highlights significant progress in improving diversity within the cybersecurity industry. According to the report, 83% of companies have set diversity hiring goals for the next few years, with a particular focus on increasing the representation of women and minority groups. Additionally, structured programs targeting women have remained a priority, with 73% of IT decision-makers implementing initiatives specifically aimed at recruiting more women into cybersecurity roles. These efforts suggest a growing commitment to enhancing diversity and inclusion within the field, which is essential for addressing the global cybersecurity skills shortage.

Women hold approximately 25% of the cybersecurity jobs globally, and that number is growing. This representation has seen a steady increase from about 10% in 2013 to 20% in 2019, and it’s projected to reach 30% by 2025, reflecting ongoing efforts to enhance gender diversity in this field. 

Big tech companies are playing a pivotal role in increasing the number of women in cybersecurity by launching large-scale initiatives aimed at closing the gender gap. Microsoft, for instance, has committed to placing 250,000 people into cybersecurity roles by 2025, with a specific focus on underrepresented groups, including women. Similarly, Google and IBM are investing billions into cybersecurity training programs that target women and other underrepresented groups, aiming to equip them with the necessary skills to succeed in the industry.

This progress is crucial as diverse teams are often better equipped to tackle complex cybersecurity challenges, bringing a broader range of perspectives and innovative solutions to the table. As organizations continue to emphasize diversity in hiring, the cybersecurity industry is likely to see improvements not only in workforce composition but also in the overall effectiveness of cybersecurity strategies.

Good for business

This imbalance is not just a social issue — it’s a business one. There are not enough cybersecurity professionals joining the workforce, resulting in a shortage. As of the ISC2’s 2022 report, there is a worldwide gap of 3.4 million cybersecurity professionals. In fact, most organizations feel at risk because they do not have enough cybersecurity staffing. 

Cybersecurity roles are also among the fastest growing roles in the United States. The Cybersecurity and Infrastructure Security Agency (CISA) introduced the Diverse Cybersecurity Workforce Act of 2024 to promote the cybersecurity field to underrepresented and disadvantaged communities. 

Here are a few ideas for how we can help accelerate gender diversity in cybersecurity:

  1. Mentorship and sponsorship: Experienced professionals should actively mentor and sponsor women in these fields, helping them navigate the challenges and seize opportunities.

    Unfortunately, this year the cybersecurity industry has seen major losses in organizations that support women. Women Who Code (WWC) and Girls in Tech shut their doors due to shortages in funds. Other organizations that support women in tech are still available.

Companies may also consider internal mentorship programs or working with partners to allow cross-company mentorship opportunities.

Women within the cybersecurity field should also consider guest lecture positions or even teaching. Young girls who do not get to see women in the field are statistically less likely to choose that as a profession.

  2. Inclusive work environments: Companies must create cultures where diversity is celebrated, not just tolerated or a means to an end. This means fostering environments where women feel empowered to share their ideas and take risks. This could include:
  • Provide training to employees at all levels. At Docker, every employee receives an annual training budget. Additionally, our Employee Resource Groups (ERGs) are provided with budgets to facilitate educational initiatives to support under-represented groups. Teams also can add additional training as part of the annual budgeting process.
  • Ensure there is an established career ladder for cybersecurity roles within the organization. Work with team members to understand their wishes for career advancement and create internal development plans to support those achievements. Make sure results are measurable. 
  • Provide transparency around promotions and pay, reducing the gender gaps in these areas. 
  • Ensure recruiters and managers are trained on diversity and identifying diverse candidate pools. At Docker, we invest in sourcing diverse candidates and ensuring our interview panels have a diverse team so candidates can learn about different perspectives regarding life at Docker.
  • Ensure diverse recruitment panels. This is important for recruiting new diverse talent and allows people to understand the culture from multiple perspectives.
  3. Policy changes: Companies should implement policies that support work-life balance, such as flexible working hours and parental leave, making it easier for women to thrive in these demanding fields. Companies could consider the following programs:
  • Generous paid parental leave.
  • Ramp-back programs for parents returning from parental leave.
  • Flexible working hours, remote working options, condensed workdays, etc. 
  • Manager training to ensure managers are being inclusive and can navigate diverse direct report needs.
  4. Employee Resource Groups (ERGs): Establishing allyship groups and/or employee resource groups (ERGs) helps ensure that employees feel supported and have a mechanism to report needs to the organization. For example, a Caregivers ERG can help advocate for women who need flexibility in their schedule to allow for caregiving responsibilities. 

Better together

As we reflect on the progress made in gender diversity, especially in the cybersecurity industry, it’s clear that while we’ve come a long way, there is still much more to achieve. The underrepresentation of women in cybersecurity is not just a diversity issue — it’s a business imperative. Diverse teams bring unique perspectives that drive innovation, foster creativity, and enhance problem-solving capabilities. The ongoing efforts by companies, coupled with supportive policies and inclusive cultures, are critical steps toward closing the gender gap.

The cybersecurity landscape is evolving, and so must our approach to diversity. It’s encouraging to see big tech companies and organizations making strides in this direction, but the journey is far from over. As we commemorate Women’s Equality Day, let’s commit to not just acknowledging the need for diversity but actively working toward it. The future of cybersecurity — and the future of technology — depends on our ability to embrace and empower diverse voices.

Let’s make this a reality, not just for the sake of our daughters but for our entire industry.

Learn more

Join Docker CEO Scott Johnston at SwampUP 2024 in Austin
https://www.docker.com/blog/swampup-2024-austin/
Fri, 06 Sep 2024 13:00:00 +0000

We are excited to announce Docker’s participation in JFrog’s flagship event, SwampUP 2024, which will take place September 9-11 in Austin, Texas. In his SwampUP keynote talk, Docker CEO Scott Johnston will discuss how the Docker and JFrog collaboration boosts secure software and AI application development.


Keynote highlights

Johnston will discuss Docker’s approach to managing secure software supply chains by providing developer teams with trusted content, reducing and limiting exposure to malicious content in the early development stages. He will explore how Docker Desktop, Docker Hub, and Docker Scout play critical roles in ensuring that the building blocks developers rely on are deployed securely. By bringing security to the root of the software development lifecycle, highlighting vulnerabilities, and bringing trusted container images to the inner loop, Docker empowers development teams to safeguard their process, ensuring the delivery of higher quality, more secure applications, faster. 

Attendees will get insights into how Docker innovations, including Docker Business capabilities and Docker Hub benefits, are transforming software development. Johnston will walk through the practical benefits of integrating Docker’s products within JFrog’s ecosystem, showcasing real-world examples of how companies use these combined tools to streamline their development pipelines and accelerate application delivery, with many of those applications powered by ML and AI. This combination enables a more comprehensive approach to managing software supply chains, ensuring that security is embedded throughout the development lifecycle.

Better together 

Docker and JFrog’s partnership is more than just a collaboration: It’s a commitment to providing developers with the tools and resources they need to build secure, efficient, and scalable applications. This connection between Docker’s expertise in container-first software development and JFrog’s comprehensive DevOps platform empowers development teams to manage their software supply chains with precision. By bringing together Docker’s trusted content and JFrog’s robust artifact management, developers can ensure their applications are built on a foundation of security and reliability.

Our mutual customers with Docker Business subscriptions can leverage features like Registry Access Management and Image Access Management to ensure developers only access verified registries and image repositories, such as specific instances of JFrog Artifactory or JFrog Container Registry.

Looking ahead, Docker and JFrog are committed to continuing their joint efforts in advancing secure software supply chain practices. Upcoming initiatives include expanding the availability of trusted content, enabling deeper integrations between Docker Scout and JFrog’s products, and introducing new features that will further enhance developer productivity and security. These developments will help organizations navigate the complexities of modern software development with greater confidence and control.

See you in Austin

As we prepare for SwampUP, we invite you to explore the integrations between Docker and JFrog that are already transforming development workflows. Whether you’re looking to manage your on-premises images with JFrog Artifactory or leverage Docker’s advanced security analytics and automated image management capabilities, this partnership offers resources to help developers successfully deploy cloud-native and hybrid applications with containerization best practices at their core.

Catch Scott Johnston’s keynote at SwampUP and learn more about how our partnership with JFrog can elevate your development processes. We’re excited to work together to build a more secure, efficient, and innovative software development ecosystem. See you in Austin!

Learn more

New Docker Desktop Enterprise Admin Features: MSI Installer and Login Enforcement Alternative https://www.docker.com/blog/docker-desktop-msi-installer-login-enforcement-alternatives/ Tue, 03 Sep 2024 14:02:47 +0000 https://www.docker.com/?p=56785 At Docker, we continuously strive to enhance the ease and security of our platform for all users. We’re excited to launch the general availability for two significant updates: the Docker Desktop MSI installer and a new sign-in enforcement alternative. These updates aim to streamline administration, improve security, and ensure users can take full advantage of Docker Business subscription features.


Docker Desktop MSI installer

Replacing an EXE installer with an MSI installer for desktop applications offers numerous advantages, particularly for enterprise customers:

  • Enhanced deployment features: MSI installers provide the ability to manage installations through Group Policy and offer more comprehensive installation and uninstallation control.
  • Easier and more secure mass deployment: Facilitates secure, efficient deployment across multiple devices, enhancing IT administration efficiency.
  • Widely accepted: MSI installers are recognized in both home and enterprise environments.
  • Supports standardized silent install parameters: Aligns with industry standards for silent installations.
  • Ideal for large-scale deployment: MSI files can be customized to include specific options, such as silent installs or custom installation paths, making them perfect for corporate environments.

For customers using Intune MDM, we have detailed documentation to assist with integration: Intune MDM Documentation.

To access the installer, navigate to the Docker Admin Console > Security and Access > Deploy Docker Desktop.

Sign-in enforcement: Streamlined alternative for organizations

Screenshot of sign-in enforcement window, saying "Sign in required! Please sign in to continue using Docker Desktop. You must be a member of the Docker organization." Blue button options are to Close Application or Sign in.
Figure 1: Sign-in enforcement.

Recognizing the need for more streamlined and familiar ways to enforce sign-in protocols, Docker is introducing a new sign-in enforcement mechanism for Windows OS (Figure 1). This update brings several business benefits, including increased user logins and better seat allocation awareness, ultimately helping customers maximize their business subscription features and manage license costs more effectively.

We now offer integration with the Windows Registry, allowing admins to add approved organizations directly within familiar Windows system settings. Find out more.

By moving away from the traditional registry.json method and adopting universally recognized settings, Docker simplifies the process for IT admins already familiar with these systems. This change means:

  • Easier integration: Organizations can seamlessly integrate Docker sign-in enforcement into their existing configuration management workflows.
  • Simplified administration: Reduces the learning curve and eliminates the need for additional internal approvals for new file types.

These changes are designed to offer quick deployment and familiar processes to IT administrators. We are committed to refining these mechanisms based on user feedback and evolving requirements. 

Note that the legacy registry.json method will continue to work, ensuring support for existing customers, but it should now be considered a legacy method. If you roll out a registry key, it will take precedence over any pre-existing registry.json.

Roll out registry key sign-in enforcement at Docker Desktop install time via the installer’s --allowed-org option (exposed as the ALLOWEDORG property in the MSI example below).

For example, to deploy the MSI installer with sign-in enforcement, run the following: 

msiexec /i "DockerDesktop.msi" /L*V ".\msi.log" /quiet /norestart ALLOWEDORG="docker.com"

The above command installs Docker Desktop silently with verbose logging, without restarting the system, and it allows only the specified organization (in this case, “docker.com”) to use Docker Desktop by enforcing sign-in.
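If you want to confirm that the enforcement key was created after installation, you can query the Windows registry. The exact key path and value name can vary by version, so treat the ones shown here as an example and verify them against the sign-in enforcement documentation for your Docker Desktop version:

REM Example check: inspect the sign-in enforcement key created by the installer
reg query "HKLM\SOFTWARE\Policies\Docker\Docker Desktop" /v allowedOrgs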

Check our full step-by-step installation documentation.

Roadmap

We’re also working on several related administrative improvements, such as:

  • PKG enterprise installer for macOS.
  • macOS configuration profiles for enforcing sign-in.
  • Supporting multiple organizations in all available sign-in enforcement mechanisms.

Stay tuned for these exciting updates!

Wrapping up

These updates reflect our ongoing commitment to improving the Docker platform for our users. By introducing the Docker Desktop MSI installer and new sign-in enforcement alternatives, we aim to simplify deployment, enhance security, and streamline administration for organizations of all sizes. We encourage IT teams and administrators to start planning for these changes to enhance their Docker experience.

Learn more

Docker Desktop 4.34: MSI Installer GA, Upgraded Host Networking, and Powerful Enhancements for Boosted Productivity & Administration https://www.docker.com/blog/docker-desktop-4-34/ Tue, 03 Sep 2024 14:00:48 +0000 https://www.docker.com/?p=57659 Key GA features of the Docker Desktop 4.34 release include the MSI installer for bulk deployment, host networking support, automatic reclamation of disk space on WSL2, streamlined CLI authentication, and integration with NVIDIA AI Workbench.

Docker Desktop 4.34 introduces key features to enhance security, scalability, and productivity for development teams of all sizes, making deploying and managing environments more straightforward. With the general availability (GA) of the MSI installer for bulk deployment, managing installations across Windows environments becomes even simpler. Enhanced authentication features offer an improved administration experience while reinforcing security. Automatically reclaim valuable disk space with Docker Desktop’s new smart compaction feature, streamlining storage management for WSL2 users. Additionally, the integration with NVIDIA AI Workbench provides developers with a seamless connection between model training and local development. Explore how these innovations simplify your workflows and foster a culture of innovation and reliability in your development practices.


Deploy Docker Desktop in bulk with the MSI installer

We’re excited to announce that the MSI installer for Docker Desktop is now generally available to all our Docker Business customers. This powerful tool allows you to customize and deploy Docker Desktop across multiple users or machines in an enterprise environment, making it easier to manage Docker at scale. 

Features include:

  • Interactive and silent installations: Choose between an interactive setup process or deploy silently across your organization without interrupting your users.
  • Customizable installation paths: Tailor the installation location to fit your organization’s needs.
  • Desktop shortcuts and automatic startup: Simplify access for users with automatic creation of desktop shortcuts and Docker Desktop starting automatically after installation.
  • Set usage to specific Docker Hub organizations: Control which Docker Hub organizations your users are tied to during installation.

Docker administrators can download the MSI installer directly from the Docker Admin Console.

One of the standout features of this installer is the --allowed-org flag. This option enables the creation of a Windows registry key during installation, enforcing sign-in to a specified organization. By requiring sign-in, you ensure that your developers are using Docker Desktop with their corporate credentials, fully leveraging your Docker Business subscription. This also adds an extra layer of security, protecting your software supply chain.

Additionally, this feature paves the way for Docker to provide you with valuable usage insights across your organization and enable cloud-based control over application settings for every user in your organization in the future.

Figure 1: Docker admins can download the MSI installer directly from the Docker Admin Console.

What’s next

We’re also working on releasing a PKG enterprise installer for macOS, config profiles for macOS, and supporting multiple organizations in all supported sign-in enforcement mechanisms. 

Refer to our docs to learn about MSI configuration and discover more about sign-in enforcement via Windows registry key.

Host networking support in Docker Desktop 

Previously, Docker Desktop lacked seamless host networking capability, complicating integration between host and container network services. Developers had to spend time setting up and enabling communication between the host and containers. Docker Desktop now supports host networking directly. 

Host networking allows containers started with --net=host to use localhost to connect to TCP and UDP services on the host, and it lets software on the host use localhost to connect to TCP and UDP services in the container. This simplifies setup for scenarios in which close integration between host and container network services is required. Additionally, we’re driving cross-platform consistency and simplifying configuration by reducing the need for additional steps, such as setting up port forwarding or bridge networks. 
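As a minimal sketch of the first case (the host service, port, and image here are arbitrary examples), a container started with --net=host can reach a service listening on the host’s localhost without any port mapping:

# On the host: start a throwaway HTTP server on port 8000
python3 -m http.server 8000 &

# With host networking enabled in Docker Desktop, the container shares the
# host's localhost and can reach that server directly
docker run --rm --net=host --entrypoint curl curlimages/curl:latest -s http://localhost:8000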

While this has previously been available in the Docker Engine, we’re now extending this capability to Docker Desktop for Windows, macOS, and Linux. We’re dedicated to improving developer productivity, and this is another way we help developers spend less time configuring network settings and more time building and testing applications, accelerating development cycles. 

This new capability is available for all users logged into Docker Desktop. To enable this feature, navigate to Settings > Resources > Network. Learn more about this feature on Docker Docs. 

Figure 2: Enable the host networking support feature in the Settings menu.

Automatic reclamation of disk space in Docker Desktop for WSL2 

Previously, when customers using Docker Desktop for WSL2 deleted Docker objects such as containers, images, or builds (for example via a docker system prune), the freed storage space was not automatically reclaimed on their host. Instead, they had to use external tools to “compact” the virtual disk/distribution backing Docker Desktop.

Starting with Docker Desktop 4.34, we are rolling out automatic reclamation of disk space. When you quit the app, Docker Desktop will automatically check whether there is storage space that can be returned to the host. It will then scan the virtual disk used for Docker storage and compact it by returning all zeroed blocks to the operating system. Currently, Docker Desktop will only start the scan when it estimates that at least 16 GB of space can be returned. In the future, we plan to make this threshold adaptive and configurable by the user.

The feature is now enabled for all customers running the Mono distribution architecture for Docker Desktop on WSL2. This new architecture, which was rolled out starting with Docker Desktop 4.30 for all fresh installations of Docker Desktop, removed the need for a dedicated docker-desktop-data WSL2 distribution to store docker data. We will be rolling out the new architecture to all customers in the upcoming Docker Desktop releases.

Customers with installations still using the docker-desktop-data WSL2 distribution can compact storage manually via VHDX compaction tools, or change the WSL2 configuration to enable the experimental WSL2 feature for disk cleanup.
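For reference, a manual compaction with the built-in Windows tools might look like the following sketch; the VHDX path shown is the typical default location for the docker-desktop-data distribution and may differ on your machine:

wsl --shutdown
diskpart
REM inside diskpart, run:
select vdisk file="C:\Users\<you>\AppData\Local\Docker\wsl\data\ext4.vhdx"
compact vdisk
exit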

(Pro tip: Did you know you can use the Disk Usage extension to see how Docker Desktop is using your storage and use it to prune dangling objects with a single click?)

Authentication enhancements 

Previously, authenticating via the CLI required developers to either type their password into the command-line interface — which should generally be avoided by the security-minded — or manually create a personal access token (PAT) by navigating to their Docker account settings, generating the token, and then copying it into the CLI for authentication. This process was time-consuming and forced developers to switch contexts between the CLI and the web portal.

In this latest Docker Desktop release, we’re streamlining the CLI authentication flow. Now, users can authenticate through a seamless browser-based process, similar to the experience in CLIs like GitHub’s gh or Amazon’s AWS CLI. With this improved flow, typing docker login in the CLI will print a confirmation code and open your browser for authentication, automating PAT creation behind the scenes and eliminating the need for manual PAT provisioning. This enhancement saves time, reduces complexity, and delivers a smoother and more secure user experience. Additionally, when you authenticate using this workflow, you’ll be logged in across both Docker CLI and Docker Desktop. 

This new flow also supports developers in organizations that require single sign-on (SSO), ensuring a consistent and secure authentication process.

Figure 3: When you log in via the new workflow, you’ll be logged in across both Docker CLI and Docker Desktop.
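In practice, the new flow is a single command; the lines below simply restate the behavior described above:

# Starts the browser-based flow: the CLI prints a one-time confirmation code
# and opens your default browser to complete sign-in
docker login

# One way to confirm the login (macOS/Linux shown): the Username field
# appears in docker info output once you're signed in
docker info | grep Username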

Enterprise-grade AI application development with Docker Desktop and NVIDIA AI Workbench  

AI development is a complex journey, often hindered by the challenge of connecting the dots between model training, local development, and deployment. Developers frequently encounter a fragmented and inconsistent development environment and toolchain, making it difficult to move seamlessly from training models in the cloud to running them locally. This fragmentation slows down innovation, introduces errors, and complicates the end-to-end development process.

To solve this, we’re proud to announce the integration of Docker Desktop with NVIDIA AI Workbench, a collaboration designed to streamline every stage of AI development. This solution brings together the power of Docker’s containerization with NVIDIA’s leading AI tools, providing a unified environment that bridges the gap between model training and local development.

With this integration, you can now train models in the cloud using NVIDIA’s robust toolkit and effortlessly transition to local development on Docker Desktop. This eliminates the friction of managing different environments and configurations, enabling a smoother, more efficient workflow from start to finish. 

To learn more about this collaboration and how Docker Business supports enterprise-grade AI application development, read our blog post. 

Multi-platform UX improvements and the containerd image store  

In February 2024, we announced the general availability of the containerd image store in Docker Desktop. Since then, we’ve been working on improving the output of our commands to make multi-platform images easier to view and manage. 

Now, we are happy to announce that the docker image list CLI command now supports an experimental --tree flag. This offers a completely new tree view of the image list, which is more suitable for describing multi-platform images.

Figure 4: New CLI tree view of the image list.
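For example, with the containerd image store enabled you can try the new view like this (the flag is experimental, so its output may change between releases):

# Experimental tree view that groups platform variants under their
# multi-platform parent image
docker image list --tree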

If you’re looking for multi-platform support, you need to ensure that you have the containerd image store enabled in Docker Desktop (see General settings in Docker Desktop, select Use containerd for pulling and storing images). As of the Docker Desktop 4.34 release, fresh installs or factory resets of Docker Desktop will now default to using the containerd image store, meaning that you get multi-platform building capability out of the box. 

Figure 5: You can enable the containerd image store in the Docker Desktop general settings.
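As a quick sketch of that out-of-the-box capability (the image name and platform list here are placeholders), a multi-platform build with the containerd image store enabled could look like this:

# Build one tag for both amd64 and arm64; with the containerd image store
# enabled, the multi-platform result can be kept locally
docker build --platform linux/amd64,linux/arm64 -t myorg/myapp:1.0 .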

To learn more about the containerd image store, check out our containerd documentation. 

Wrapping up 

Docker Desktop 4.34 marks a significant milestone in our commitment to providing an industry-leading container development suite. With key features such as the MSI installer for bulk deployment, enhanced authentication mechanisms, and the integration with NVIDIA AI Workbench, Docker Desktop is transforming how teams manage deployments, protect their environments, and accelerate their development workflows. 

These advancements simplify your development processes and help drive a culture of innovation and reliability. Stay tuned for more exciting updates and enhancements as we continue to deliver solutions designed to empower your development teams and secure your operations at scale. 

Upgrade to Docker Desktop 4.34 today and experience the future of container development. 

Learn more

Streamlining Local Development with Dev Containers and Testcontainers Cloud https://www.docker.com/blog/streamlining-local-development-with-dev-containers-and-testcontainers-cloud/ Tue, 27 Aug 2024 13:21:54 +0000 https://www.docker.com/?p=57323 In today’s world of fast-paced development, setting up a consistent and efficient development environment for all team members is crucial for overall productivity. Although Docker itself is a powerful tool that enhances developer efficiency, configuring a local development environment can still be complex and time-consuming. This is where development containers (or dev containers) come into play.


Dev containers provide an all-encompassing solution, offering everything needed to start working on a feature and running the application seamlessly. In specific terms, dev containers are Docker containers (running locally or remotely) that encapsulate everything necessary for the software development of a given project, including integrated development environments (IDEs), specific software, tools, libraries, and preconfigured services. 

This definition of an isolated environment can be easily transferred and launched on any computer or cloud infrastructure, allowing developers and teams to abstract away the specifics of their operating systems. The dev container settings are defined in a devcontainer.json file, which is located within a given project, ensuring consistency across different environments.

However, development is only one part of a developer’s workflow. Another critical aspect is testing to ensure that code changes work as expected and do not introduce new issues. If you use Testcontainers for integration testing or rely on Testcontainers-based services to run your application locally, you must have Docker available from within your dev container. 

In this post, we will show how you can run Testcontainers-based tests or services from within the dev container and how to leverage Testcontainers Cloud within a dev container securely and efficiently to make interacting with Docker even easier. 

Getting started with dev containers

To get started with dev containers on your computer using this tutorial, you will need:

  • Git 2.25+
  • Docker 
  • IntelliJ IDE 

There’s no need to preconfigure your project to support development containers; the IDE will do it for you. But, we will need some Testcontainers usage examples to run in the dev container, so let’s use the existing Java Local Development workshop repository. It contains the implementation of a Spring Boot-based microservice application for managing a catalog of products. The demo-state branch contains the implementation of Testcontainers-based integration tests and services for local development. 

Although this project typically requires Java 21 and Maven installed locally, we will instead use dev containers to preconfigure all necessary tools and dependencies within the development container. 

Setting up your first dev container

To begin, clone the project:

git clone https://github.com/testcontainers/java-local-development-workshop.git

Next, open the project in your local IntelliJ IDE and install the Dev Containers plugin (Figure 1).

Screenshot showing overview of Dev Containers plugin.
Figure 1: Install the Dev Containers plugin.

Next, we will add a .devcontainer/devcontainer.json file with the requirements to the project. In the context menu of the project root, select New > .devcontainer (Figure 2).

Screenshot of project menu showing selection of "New" and ".devcontainer"
Figure 2: Choosing new .devcontainer.

We’ll need Java 21, so let’s use the Java Dev Container Template. Then, select Java version 21 and enable Install Maven (Figure 3).

Screenshot of dev container configuration page showing Dev Container Template options, with Install Maven selected.
Figure 3: Dev Container Template options.

Select OK, and you’ll see a newly generated devcontainer.json file. Let’s now tweak that a bit more. 

Because Testcontainers requires access to Docker, we need to provide some access to Docker inside of the dev container. Let’s use an existing Development Container Feature to do this. Features enhance development capabilities within your dev container by providing self-contained units of specific container configuration including installation steps, environment variables, and other settings. 

You can add the Docker-in-Docker feature to your devcontainer.json to install Docker into the dev container itself and thus have a Docker environment available for the Testcontainers tests.

Your devcontainer.json file should now look like the following:

{
  "name": "Java Dev Container TCC Demo",
  "image": "mcr.microsoft.com/devcontainers/java:1-21-bullseye",
  "features": {
    "ghcr.io/devcontainers/features/java:1": {
      "version": "none",
      "installMaven": "true",
      "installGradle": "false"
    },
    "ghcr.io/devcontainers/features/docker-in-docker:2": {
      "version": "latest",
      "moby": true,
      "dockerDashComposeVersion": "v1"
    }
  },
  "customizations": {
    "jetbrains": {
      "backend": "IntelliJ"
    }
  }
}

Now you can run the container. Navigate to devcontainer.json, click on the Dev Containers plugin, and select Create Dev Container and Clone Sources. The New Dev Container window will open (Figure 4).

Screenshot of New Dev Container window, with options for Docker, Git Repository, and Git Branch.
Figure 4: New Dev Container window.

In the New Dev Container window, you can select the Git branch and specify where to create your dev container. By default, it uses the local Docker instance, but you can select the ellipsis (…) to add additional Docker servers from the cloud or WSL and configure the connection via SSH.

If the build process is successful, you will be able to select the desired IDE backend, which will be installed and launched within the container (Figure 5).

Screenshot of Building Dev Container window, showing container build progress and details, such as Container ID and Container name.
Figure 5: Dev container build process.

After you select Continue, a new IDE window will open, allowing you to code as usual. To view the details of the running dev container, execute docker ps in the terminal of your host (Figure 6).

Screenshot showing results of "docker ps" command, listing Container ID, Image, Status, Ports, etc.
Figure 6: Viewing dev container details.

If you run the TestApplication class, your application will start with all required dependencies managed by Testcontainers. (For more implementation details, refer to the “Local development environment with Testcontainers” step on GitHub.) You can see the services running in containers by executing docker ps in your IDE terminal (within the container). See Figure 7.

Screenshot showing more details of "docker ps" command, including services running in containers.
Figure 7: Viewing services running in containers.

Setting up Testcontainers Cloud in your dev container

To reduce the load on local resources and enhance the observability of Testcontainers-based containers, let’s switch from the Docker-in-Docker feature to the Testcontainers Cloud (TCC) feature:  ghcr.io/eddumelendez/test-devcontainer/tcc:0.0.2.

This feature will install and run the Testcontainers Cloud agent within the dev container, providing a remote Docker environment for your Testcontainers tests.

To enable this functionality, you’ll need to obtain a valid TC_CLOUD_TOKEN, which the Testcontainers Cloud agent will use to establish the connection. If you don’t already have a Testcontainers Cloud account, you can sign up for a free account. Once logged in, you can create a Service Account to generate the necessary token (Figure 8).

Screenshot of  Testcontainer "Create new Service Account" window, showing generation of Access Token.
Figure 8: Generating Testcontainers Cloud access token.

To use the token value, we’ll utilize an environment file. Create .devcontainer/devcontainer.env and add your newly generated token value (Figure 9). Be sure to add devcontainer.env to .gitignore to prevent it from being pushed to the remote repository.

Screenshot of Project menu showing addition of token value to devcontainer.env.
Figure 9: Add your token value to devcontainer.env.
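For reference, the environment file is a single key/value line; the token value below is only a placeholder:

# .devcontainer/devcontainer.env -- keep this file out of version control
TC_CLOUD_TOKEN=<your-testcontainers-cloud-token>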

In your devcontainer.json file, include the following options:

  • The runArgs to specify that the container should use the .env file located at .devcontainer/devcontainer.env
  • The containerEnv to set the environment variables TC_CLOUD_TOKEN and TCC_PROJECT_KEY within the container. The TC_CLOUD_TOKEN variable is dynamically set from the local environment variable.

The resulting devcontainer.json file will look like this:

{
  "name": "Java Dev Container TCC Demo",
  "image": "mcr.microsoft.com/devcontainers/java:21",
  "runArgs": [
    "--env-file",
    ".devcontainer/devcontainer.env"
  ],
  "containerEnv": {
    "TC_CLOUD_TOKEN": "${localEnv:TC_CLOUD_TOKEN}",
    "TCC_PROJECT_KEY": "java-local-development-workshop"
  },
  "features": {
    "ghcr.io/devcontainers/features/java:1": {
      "version": "none",
      "installMaven": "true",
      "installGradle": "false"
    },
    "ghcr.io/eddumelendez/test-devcontainer/tcc:0.0.2": {}
  },
  "customizations": {
    "jetbrains": {
      "backend": "IntelliJ"
    }
  }
}

Let’s rebuild and start the dev container again. Navigate to devcontainer.json, select the Dev Containers plugin, then select Create Dev Container and Clone Sources, and follow the steps as in the previous example. Once the build process is finished, choose the necessary IDE backend, which will be installed and launched within the container.

To verify that the Testcontainers Cloud agent was successfully installed in your dev container, run the following in your dev container IDE terminal: 

cat /usr/local/share/tcc-agent.log

You should see a log line similar to Listening address= if the agent started successfully (Figure 10).

Screenshot of log output, including the "Listening address" line, to verify successful installation of Testcontainers Cloud agent.
Figure 10: Verifying successful installation of Testcontainers Cloud agent.

Now you can run your tests. The ProductControllerTest class contains Testcontainers-based integration tests for our application. (For more implementation details, refer to the “Let’s write tests” step on GitHub.)
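Assuming Maven from the dev container feature and the test class name from the workshop repository, running the integration tests from the dev container terminal might look like this:

# Run only the Testcontainers-based integration tests; the containers they
# declare are created on Testcontainers Cloud instead of a local daemon
mvn test -Dtest=ProductControllerTest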

To view the containers running during the test cycle, navigate to the Testcontainers Cloud dashboard and check the latest session (Figure 11). You will see the name of the Service Account you created earlier in the Account line, and the Project name will correspond to the TCC_PROJECT_KEY defined in the containerEnv section. You can learn more about how to tag your session by project or workflow in the documentation.

Screenshot of Testcontainers Cloud dashboard listing 4 containers and a green Connect button.
Figure 11: Testcontainers Cloud dashboard.

If you want to run the application and debug containers, you can Connect to the cloud VM terminal and access the containers via the CLI (Figure 12).

Screenshot of Testcontainers Cloud showing container access via the command line.
Figure 12: Accessing containers via the CLI.

Wrapping up

In this article, we’ve explored the benefits of using dev containers to streamline your Testcontainers-based local development environment. Using Testcontainers Cloud enhances this setup further by providing a secure, scalable solution for running Testcontainers-based containers, addressing the potential security concerns and resource limitations of the Docker-in-Docker approach. This powerful combination simplifies your workflow and boosts productivity and consistency across your projects.

Running your dev containers in the cloud can further reduce the load on local resources and improve performance. Stay tuned for upcoming innovations from Docker that will enhance this capability even further.

Learn more

Optimizing AI Application Development with Docker Desktop and NVIDIA AI Workbench https://www.docker.com/blog/optimizing-ai-application-development-docker-desktop-nvidia-ai-workbench/ Mon, 26 Aug 2024 16:00:54 +0000 https://www.docker.com/?p=57388 Are you looking to streamline how you incorporate LLMs into your applications? Would you prefer to do this using the products and services you’re already familiar with? This is where Docker Desktop, especially when paired with the advanced capabilities offered by Docker’s Business subscription tier, comes into play — particularly when combined with NVIDIA’s cutting-edge technology.

Imagine a development environment where setting up and managing AI workloads is as intuitive as the everyday tools you’re already using. With our deepening partnership with NVIDIA, we are committed to making this a reality. This collaboration not only enhances your ability to leverage Docker containers but also significantly improves your overall experience of building and developing AI applications.

What’s more, this partnership is designed to support your long-term growth and innovation goals. Docker Desktop with Docker Business, combined with NVIDIA software, provides the perfect launchpad for developers who want to accelerate their AI development journey — whether it’s building prototypes or deploying enterprise-grade AI applications. This isn’t just about providing tools; it’s about investing in your abilities, your career, and the innovation capabilities of your organization.

With Docker Business, you gain access to advanced capabilities that enhance security, streamline management, and offer unparalleled support. Meanwhile, NVIDIA AI Workbench provides a robust, containerized environment tailored for AI and machine learning projects. Together, these solutions empower you to push the boundaries of what’s possible, bringing AI into your applications more effortlessly and effectively.


What is NVIDIA AI Workbench?

NVIDIA AI Workbench is a free developer toolkit powered by containers that enables data scientists and developers to create, collaborate, and migrate AI workloads and development environments across GPU systems. It targets scenarios like model fine-tuning, data science workflows, retrieval-augmented generation, and more. Users can install it on multiple systems but drive everything from a client application that runs locally on Windows, Ubuntu, and macOS. NVIDIA AI Workbench helps enable collaboration and distribution through Git-based platforms, like GitHub and GitLab. 

How does Docker Desktop relate to NVIDIA AI Workbench?

NVIDIA AI Workbench requires a container runtime. Docker’s container runtime (Docker Engine), delivered through Docker Desktop, is the recommended AI Workbench runtime for developers using AI Workbench on Windows and macOS. Previously, AI Workbench users had to install Docker Desktop manually. With this newest release of AI Workbench, developers who select Docker as their container runtime will have Docker Desktop installed on their machine automatically, with no manual steps required.

 You can learn about this integration in NVIDIA’s technical blog.

Moving beyond the AI application prototype

Docker Desktop is more than just a tool for application development; it’s a launchpad that provides an integrated, easy-to-use environment for developing a wide range of applications, including AI. What makes Docker Desktop particularly powerful is its ability to seamlessly create and manage containerized environments, ensuring that developers can focus on innovation without worrying about the underlying infrastructure.

For developers who have already invested in Docker, this means that the skills, automation, infrastructure, and tooling they’ve built up over the years for other workloads are directly applicable to AI workloads as well. This cross-compatibility offers a huge return on investment, as it allows teams to extend their existing Docker-based workflows to include AI applications and services without needing to overhaul their processes or learn new tools.

Docker Desktop’s compatibility with Windows, macOS, and Linux makes it an ideal choice for diverse development teams. Its robust features support a wide range of development workflows, from initial prototyping to large-scale deployment, ensuring that as AI applications move from concept to production, developers can leverage their existing Docker infrastructure and expertise to accelerate and scale their work.

For those looking to create high-quality, enterprise-grade AI applications, Docker Desktop with Docker Business offers advanced capabilities. These include enhanced security, management, and support features that are crucial for enterprise and advanced development environments. With Docker Business, development teams can build securely, collaborate efficiently, and maintain compliance, all while continuing to utilize their existing Docker ecosystem. By leveraging Docker Business, developers can confidently accelerate their workflows and deliver innovative AI solutions with the same reliability and efficiency they’ve come to expect from Docker.

Accelerating developer innovation with NVIDIA GPUs

In the rapidly evolving landscape of AI development, the ability to leverage GPU capabilities is crucial for handling the intensive computations required for tasks like model training and inference. Docker is working to offer flexible solutions to cater to different developers, whether you have your own GPUs or need to leverage cloud-based compute. 

Running containers with NVIDIA GPUs through Docker Desktop 

GPUs are at the heart of AI development, and Docker Desktop is optimized to leverage NVIDIA GPUs effectively. With Docker Desktop 4.29 or later, developers can configure CDI support in the daemon and easily make all NVIDIA GPUs available in a running container by using the --device option with CDI device names.

For instance, the following command can be used to make all NVIDIA GPUs available in a container:

docker run --device nvidia.com/gpu=all <image> <command>

For more information on how Docker Desktop supports NVIDIA GPUs, refer to our GPU documentation.
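One quick way to sanity-check the setup (assuming the CDI spec on your machine injects the NVIDIA driver utilities, which is the common configuration) is to run nvidia-smi inside a container:

# Lists the GPUs visible to the container; requires CDI support enabled in
# Docker Desktop 4.29+ and an NVIDIA driver installed on the host
docker run --rm --device nvidia.com/gpu=all ubuntu nvidia-smi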

No GPUs? No problem with Testcontainers Cloud

Not all developers have local access to powerful GPU hardware. To bridge this gap, we’re exploring GPU support in Testcontainers Cloud. This will allow developers to access GPU resources in a cloud environment, enabling them to run their tests and validate AI models without needing physical GPUs. With Testcontainers Cloud, you will be able to harness the power of GPUs from anywhere, democratizing high-performance AI development.

Trusted AI/ML content on Docker Hub

Docker Desktop provides a reliable and efficient platform for developers to discover and experiment with new ideas and approaches in AI development. Through its trusted content program, Docker works with open source and commercial communities to select and curate high-quality images and distributes them on Docker Hub, under Docker Official Images, Docker Sponsored Open Source, and Docker Verified Publishers. With a wealth of AI/ML content, Docker makes it easy for users to discover and pull images for quick experimentation. This includes various images, such as NVIDIA software offerings and many more, allowing developers to get started quickly and efficiently.

Accelerated builds with Docker Build Cloud

Docker Build Cloud is a fully managed service designed to streamline and accelerate the building, testing, and deployment of any application. By leveraging Docker Build Cloud, AI application developers can shift builds from local machines to remote BuildKit instances — resulting in up to 39x faster builds. By offloading the complex build process to Docker Build Cloud, AI development teams can focus on refining their models and algorithms while Docker handles the rest.

Docker Business users get Docker Build Cloud minutes included as part of their subscription, so they can experience faster, more efficient builds and reproducible AI deployments.
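Getting started is largely a matter of pointing your builds at a cloud builder. The organization, builder, and image names below are placeholders, and the generated builder name follows the cloud-&lt;org&gt;-&lt;name&gt; pattern, so check docker buildx ls for the exact value:

# One-time setup: create a cloud builder endpoint for your organization
docker buildx create --driver cloud myorg/mybuilder

# Run a build on Docker Build Cloud instead of the local machine
docker buildx build --builder cloud-myorg-mybuilder -t myorg/myapp:latest .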

Ensuring quality with Testcontainers

As AI applications evolve from prototypes to production-ready solutions, ensuring their reliability and performance becomes critical. This is where testing frameworks like Testcontainers come into play. Testcontainers allows developers to test their applications using real containerized dependencies, making it easier to validate application logic that utilizes AI models in self-contained, idempotent, reproducible ways. 

For instance, developers working with LLMs can create Testcontainers-based tests that exercise their application against any model available on Hugging Face by using the recently released Ollama container.
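As a rough sketch of that idea outside of test code (the model name is a placeholder), the Ollama image can be started and loaded with a model using plain Docker commands; Testcontainers manages the same image programmatically inside the test itself:

# Start the Ollama container and expose its API on the default port
docker run -d --name ollama -p 11434:11434 ollama/ollama

# Pull a model into the running container so the application under test can call it
docker exec ollama ollama pull llama3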

Wrap up

The collaboration between Docker and NVIDIA marks a significant step forward in the AI development landscape. By integrating Docker Desktop into NVIDIA AI Workbench, we are making it easier than ever for developers to build, ship, and run AI applications. Docker Desktop provides a robust, streamlined environment that supports a wide range of development workflows, from initial prototyping to large-scale deployment. 

With advanced capabilities from Docker Business, AI developers can focus on innovation and efficiency. As we deepen our partnership with NVIDIA, we look forward to bringing even more enhancements to the AI development community, empowering developers to push the boundaries of what’s possible in AI and machine learning. 

Stay tuned for more exciting updates as we work to revolutionize AI application development.

Learn more
