Survey: Organizations looking to AI to enhance value stream efforts
https://sdtimes.com/vsm/survey-organizations-looking-to-ai-to-enhance-value-stream-efforts/ (Tue, 28 Jan 2025)

Organizations are looking to artificial intelligence to enhance their value stream initiatives, according to the fourth annual Value Stream Management survey sponsored by Broadcom and conducted by Dimension Research.

According to 90% of respondents, AI can help in the advancement of value stream management, in the areas of improving predictive analytics, automating workflows and processes, and improving product quality. 

“Some of the things I thought were kind of fascinating [in the survey] were, how much more percentage of AI is going to have an expectation in here? And where is that going to go?” asked Lance Knight, chief value stream architect at Broadcom. 

Broadcom’s survey found that the top focus for 2025 is the customer life cycle, specifically attracting new customers and delivering more customer value. When asked why their companies are adopting VSM, participants said it was specifically to deliver more customer value. Those already using VSM stated that this initiative has delivered improved data flow and decision-making. Those VSM benefits are directly correlated with the top business goals for 2025.     

In its Value Stream Solutions Landscape for Q1 2025, Forrester also notes how AI is changing what can be derived from value stream management solutions. It characterizes the landscape as “before GenAI” and “after GenAI,” with the latter leading organizations to use AI to measure any productivity gains from implementing AI.

Related: Software engineering intelligence may have its breakout year in 2025

Broadcom last quarter introduced Vaia for Value Stream Management, an AI assistant within the company’s ValueOps platform that the company said can magnify the core benefits of VSM, such as increased visibility, alignment and efficiency, which can enable more effective digital transformation.

“A VSM automation platform is going to be a big part of that, with agentic AI and the things that are going on now,” Knight said. “What solution is best set up to be able to communicate with [multiple] tools and help AI? That’s the VSM automation platform.” 

It’s the tool integrations through ConnectALL that bring VSM to another level. “[Let’s say] I’m doing vulnerability scanning. I find a vulnerability, ConnectALL sees that, and can send it to a specifically set-up AI agent that’ll research it, come back and say this is the problem, and maybe even turn off a server,” he explained. “But, at a minimum, put the repair and the recommended repair into the system right away, for a fulfiller or the person working on it to go and look at that. And that’s just the start of this.”
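
To make the kind of workflow Knight describes more concrete, here is a minimal sketch of an event-driven triage step: a scanner finding is handed to an LLM, and the recommendation is filed back into a work-tracking system. It is a generic illustration, not ConnectALL’s or Vaia’s actual API; the event payload shape and the create_work_item() helper are hypothetical stand-ins, and the snippet assumes the official openai Python SDK with an API key configured in the environment.

```python
# Generic sketch of the workflow described above: a vulnerability finding is
# routed to an LLM for triage, and the recommendation is filed as a work item.
# The event payload shape and create_work_item() are hypothetical stand-ins
# for whatever scanner and work-tracking integrations a team actually uses.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def create_work_item(title: str, body: str) -> None:
    """Hypothetical stand-in for a ticketing or ALM integration."""
    print(f"Filed work item: {title}\n{body}")


def handle_vulnerability_event(event: dict) -> None:
    """Ask an LLM to triage a scanner finding and file its recommendation."""
    prompt = (
        "A vulnerability scanner reported the following finding. "
        "Summarize the risk and recommend a repair.\n\n"
        f"Component: {event['component']}\n"
        f"Identifier: {event['id']}\n"
        f"Details: {event['details']}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name; use whatever is current
        messages=[{"role": "user", "content": prompt}],
    )
    recommendation = response.choices[0].message.content
    create_work_item(
        title=f"Vulnerability {event['id']} in {event['component']}",
        body=recommendation,
    )


if __name__ == "__main__":
    handle_vulnerability_event({
        "component": "payments-service",
        "id": "EXAMPLE-0001",
        "details": "Outdated TLS library flagged by the nightly scan.",
    })
```

In a real pipeline the same handler could be triggered by a webhook from the scanner or the integration platform; here it simply runs against a hard-coded example event.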

Meanwhile, challenges still remain for organizations looking to implement value stream management. The survey found that low team participation and a lack of visibility are impediments to successful VSM adoption.

Forrester analyst Chris Condo said, “The primary challenge to VSM adoption has been the lack of a compelling use case that all development leaders could support. Until recently, measuring DORA metrics, process improvements, or even developer experience was not a priority for many development leaders.”

“However,” he continued, “the advent of genAI, combined with the push for greater efficiency and productivity measurement, has shifted this perspective. Last year, the most frequent question I received from Forrester clients was, ‘How do I measure the productivity gains from using a copilot?’ Many had already implemented copilot without establishing a performance baseline and were now being asked by senior leadership to demonstrate the gains from this adoption. This has led to a significant rise in interest in tools that can measure productivity.”

Among the other key findings in Broadcom’s VSM survey are: 

* Every company responding to the survey that has begun a digital transformation indicated it uses or plans to use VSM, with year-over-year data showing VSM delivers increased value over time and improved data-driven decision-making.

* VSM is maturing, as indicated by the 30% of respondents who said their companies will use multiple product lines and the 11% reporting enterprise-wide use of VSM. The growing maturity is also reflected in increased use of commercial VSM tools, as organizations move away from homegrown tools and spreadsheets.

 

JetBrains releases AI coding agent Junie
https://sdtimes.com/ai/jetbrains-releases-ai-coding-agent-junie/ (Thu, 23 Jan 2025)

JetBrains has announced the launch of its new AI coding agent, Junie. Junie runs within JetBrains IDEs and can take on simple coding tasks or assist on more complex tasks with collaboration from the developer.

“Thanks to the power of JetBrains IDEs, coupled with reliable LLMs, Junie already solves tasks that would otherwise require hours of work,” Andrew Zakonov, product leader at JetBrains, wrote in a blog post.

Developers can share prompts with the agent, review the results, and adjust as needed. Over time, it learns the context of code and the developer’s preferences and style. “This results in better code quality and control on how Junie performs tasks, ensuring reliability, making Junie a trusted collaborator on your team,” Zakonov wrote. 

RELATED: AI agents are transforming the software development life cycle

It can also run code and tests, and can check the project state after making changes to verify that all tests have passed. “AI-generated code can be just as flawed as developer-written code. Ultimately, Junie will not just speed up development – it is poised to raise the bar for code quality,” said Zakonov.

Junie was able to solve 53.6% of tasks in a single run on the SWE-bench Verified benchmark, which includes 500 developer tasks. According to JetBrains, this proves it is capable of adapting to the needs of today’s developers.

According to JetBrains, Junie does not run locally, as it uses OpenAI and Anthropic models. There is currently a waitlist for the Early Access Program, and it is initially available in IntelliJ IDEA Ultimate and PyCharm Professional, with support for WebStorm coming next. It is only available on macOS and Linux at the moment.

Postman launches new platform that lets developers build AI agents
https://sdtimes.com/api/postman-launches-new-platform-that-lets-developers-build-ai-agents/ (Wed, 22 Jan 2025)

Postman is helping make it easier for developers to design, test, and deploy AI agents with the launch of its AI Agent Builder tool. 

According to the company, the rise of AI agents represents a shift in how software systems are being built and run. “As agents gain traction, we could see a 10X–100X increase in API utility, enabling software systems to execute increasingly complex workflows. Today humans remain ‘in the loop’, but this will evolve where humans step out entirely depending on trust, and risk factors,” Abhinav Asthana, co-founder and CEO of Postman, wrote in a blog post in December. 

RELATED: AI agents are transforming the software development life cycle

Postman’s AI Agent Builder provides a centralized platform for discovering LLMs and APIs. Developers can compare responses, cost, and performance of a variety of LLMs, including OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude, Cohere, and Meta’s Llama.
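
That kind of side-by-side model comparison can also be prototyped in a few lines directly against the providers’ own SDKs. The sketch below is a generic illustration, not Postman’s AI Agent Builder itself; it assumes the official openai and anthropic Python packages are installed, API keys are set in the environment, and the model names are placeholders to be swapped for whatever is current.

```python
# Minimal side-by-side check of two hosted models: send the same prompt to
# each and compare the reply and wall-clock latency. A generic sketch, not
# Postman's tooling; model names are placeholders.
import time

from anthropic import Anthropic
from openai import OpenAI

PROMPT = "Summarize what an AI agent is in one sentence."

openai_client = OpenAI()        # assumes OPENAI_API_KEY is set
anthropic_client = Anthropic()  # assumes ANTHROPIC_API_KEY is set


def ask_openai(prompt: str) -> str:
    response = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def ask_anthropic(prompt: str) -> str:
    response = anthropic_client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=256,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text


for name, ask in [("OpenAI", ask_openai), ("Anthropic", ask_anthropic)]:
    start = time.perf_counter()
    answer = ask(PROMPT)
    elapsed = time.perf_counter() - start
    print(f"{name} ({elapsed:.2f}s): {answer}")
```

Both SDKs also return token usage on the response object, which is a natural starting point for the cost side of the comparison.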

The platform provides access to APIs from all of Postman’s verified publishers, like Salesforce, PayPal, and UPS. This will help ensure that agents are built upon accurate and reliable tools, Postman explained.

Additionally, developers will be able to leverage the company’s no-code canvas, Postman Flows, to set up agents and multi-step workflows.

Other features currently on the roadmap for the AI Agent Builder tool include the ability to deploy agents and workflows to the cloud and real-time monitoring of deployed Flows.

“The rise of agentic AI marks a pivotal shift in how software systems will be built and operated. This wave will highlight a truth that many engineering leaders are just beginning to realize: the power to deliver AI solutions lies in their APIs. At Postman, we’re excited about this future and remain focused on helping developers and customers thrive in an API-first world,” Asthana wrote.

Containers in 2025: Bridging the gap between software and hardware
https://sdtimes.com/softwaredev/containers-in-2025-bridging-the-gap-between-software-and-hardware/ (Fri, 17 Jan 2025)

Containers have long been a popular way of packaging up and delivering software, but many developers have also begun to explore using containers in more ways than originally intended.  

In a recent episode of the SD Times podcast, What the Dev, Scott McCarty, senior principal product manager for Red Hat Enterprise Linux, sat down with us to discuss the trends he’s been seeing and also make predictions for what’s to come. 

For example, he’s seen that developers are now using containers for cross-platform purposes, such as enabling x86 code to run on an Arm processor. 

According to McCarty, cross-platform development is normally fairly complicated because you not only have to develop for different systems and architectures, but your CI/CD system also needs to be on that hardware platform, or at least be able to simulate it.

He explained that a developer who mostly works in the x86 world and is trying to develop for an Arm or RISC-V processor will need to “have some kind of simulation and or real piece of hardware that you can develop locally, put into the CICD system and test in some gold capacity or production capacity locally.” That is hard to do, so the question is whether containers can help with that problem.

“I’ve been through enough of these life cycles of technology that you see that almost always, if something’s very useful, we’ll bend it to our will to make it do all kinds of things it wasn’t designed to do,” he said. 

New technologies like bootc, which stands for bootable containers, are also coming into play to expand what containers can do. Essentially, bootc lets entire operating systems exist within a single container. 

“The container image has a kernel in it, but when you deploy it in production, it’s actually just a regular virtual machine, you know, or physical machine. It sort of takes the container image, converts it into a disk image, lays it down on disk and runs it. It is not a container at runtime,” McCarty said. 

He explained that once you have a bootc image running on a virtual machine, only a single command is needed to change the behavior of that virtual machine. 

“Just as easy as you could change the personality of the application you were running with Docker or Podman … it’s actually a single bootc command to basically change the personality of a physical or virtual machine … and you have a totally different server. So you can go from Fedora 39 to RHEL 10 to Debian, whatever. You can literally just change the personality. So it gives you a flexibility with pre-deployed servers that I think we have never seen before.”

McCarty also talked about how AI and ML technologies are being integrated with container technologies. He explained that in the case of artificial general intelligence (AGI), where AI is this super genius, better than any human, then AI would no longer be just software. However, for today, AI is still software, which means it’s going to need to be packaged up somehow. 

“If it’s just software, then containers are really convenient for software,” he said. “And so we know a bunch of things about it, right? Like it’s files when it’s not running, it’s processes when it is running. And the same mechanisms that we use to control files and processes, AKA containers, become very useful to AI.”

With no understood path to AGI today, McCarty believes AI should be treated as software and put in containers.

McCarty also predicts that local development of AI will become popular, citing NVIDIA’s Project DIGITS announcement as evidence. NVIDIA calls Project DIGITS an “AI Supercomputer on your desk,” and McCarty said it’s essentially the equivalent of a Mac Mini with a GPU unit.

“I think Apple’s doing a good job with their M Series processors, and actually Podman Desktop’s doing a good job of doing pass through of GPU acceleration in containers on Mac. I’d say those are all places we see as pretty exciting technologies and enablements for developers, where we see people doing AI development in containers on a laptop or desktop, and then having local acceleration. I think that combination and permutation of technologies is pretty hot. I think people want that badly. In fact, I want that.”

Biden signs Executive Order for building out AI infrastructure
https://sdtimes.com/ai/biden-signs-executive-order-for-building-out-ai-infrastructure/ (Tue, 14 Jan 2025)

President Biden today signed an Executive Order to facilitate building the infrastructure needed for AI. Its goal is to enable the country to set up the necessary infrastructure for AI while balancing environmental concerns. 

“We will not let America be out-built when it comes to the technology that will define the future, nor should we sacrifice critical environmental standards and our shared efforts to protect clean air and clean water,” the White House wrote in a press release.

The Executive Order directs agencies such as the Department of Defense (DOD) and Department of Energy (DOE) to make federal sites available for use by the private sector to build AI data centers and new clean power facilities. 

RELATED CONTENT: Biden administration sets new rules for exporting AI chips

According to the Biden Administration, building AI infrastructure within the country is crucial for ensuring national security, as it prevents foreign adversaries from accessing these AI systems, which could have a negative impact on military and national security. It also allows the U.S. to avoid being dependent on other countries for AI tools.

The Administration also believes it is important for economic competitiveness, since AI will have large impacts across health care, transportation, education, and more. 

Scaling up AI will place additional demands on the energy grid, so the other main focus of the Executive Order is meeting those demands using clean energy technologies. 

Builders who are selected to lease DOD and DOE sites to create new AI data centers will be required to bring clean energy generation centers online to support these new facilities. 

The DOD will perform environmental analyses immediately and the agencies will also work on identifying opportunities to expedite permitting, such as establishing “categorical exclusions” when the environment will not be significantly impacted. The DOE will also coordinate with site developers on constructing, financing, facilitating, and planning upgrades of transmission lines at the sites. 

Additionally, the site developers will be required to ensure that the cost of building and operating infrastructure doesn’t raise electricity costs for consumers. 

“Today’s Executive Order enables an AI infrastructure buildout that protects national security, enhances competitiveness, powers AI with clean energy, enhances AI safety, keeps prices low for consumers, demonstrates responsible ways to scale new technologies, and promotes a competitive AI ecosystem,” the White House wrote.

Biden administration sets new rules for exporting AI chips
https://sdtimes.com/ai/biden-administration-sets-new-rules-for-exporting-ai-chips/ (Mon, 13 Jan 2025)

The Biden administration today announced new rules regarding the exportation of AI chips to 120 countries, according to reports from the Associated Press (AP).

The New York Times clarified that the framework divides countries into three categories: the U.S. and its 18 closest allies (including Britain, Canada, Germany, Japan, South Korea, and Taiwan); countries already under U.S. arms embargoes (like China and Russia), which will continue to face an existing ban on AI chip purchases; and all other countries, which would be subject to these new rules.

According to AP, under the new rules, the restricted countries would be able to purchase up to 50,000 GPUs. Government-to-government deals could increase the limit to 100,000 if a country’s renewable energy and security goals align with the U.S., and organizations in specific countries could also potentially apply for a status that enables them to buy up to 320,000 GPUs over two years.

Additionally, chip orders equivalent to 1,700 advanced GPUs wouldn’t count against the limit, which is likely designed to help universities and medical institutions meet their needs, AP speculated. 

NVIDIA responded to this news with a blog post stating that these rules would put global progress at risk. 

“The new rules would control technology worldwide, including technology that is already widely available in mainstream gaming PCs and consumer hardware. Rather than mitigate any threat, the new Biden rules would only weaken America’s global competitiveness, undermining the innovation that has kept the U.S. ahead,” the company wrote. 

The rules do include a 120-day comment period, which means that President-elect Donald Trump will be the one to actually decide the rules for selling chips overseas. 

Report: AI and security governance remain top priorities for 2025
https://sdtimes.com/ai/report-ai-and-security-governance-remain-top-priorities-for-2025/ (Mon, 6 Jan 2025)

Companies are planning to invest more heavily in AI skills and security governance, risk, and compliance initiatives this upcoming year, according to new research from O’Reilly.

The company’s Technology Trends for 2025 report analyzed data from 2.8 million users on its learning platform.

The research shows significant increases in interest in various AI skills, including prompt engineering with a 456% increase, AI principles with a 386% increase, and generative AI with a 289% increase. O’Reilly also noted that there was a 471% increase in interest in content about GitHub Copilot.

Some AI topics experienced a decrease in interest, however. GPT saw a 13% drop in usage and a downward trend in searches. According to O’Reilly, this may indicate that “developers are prioritizing foundational AI knowledge over platform-specific skills to effectively navigate across various AI models such as Claude, Google’s Gemini, and Llama.”

Security also saw increased interest, with interest in governance, risk, and compliance rising by 44%. Content related to application security increased by 17%, and zero trust rose by 13%. The company believes these trends show that organizations are focusing on building more comprehensive security strategies.

Other findings of the report include a 29% increase in data engineering skills, a decline in Python and Java interest, and a plateau in interest in cloud computing.

“This year marks a pivotal transition in technology, with AI evolving from generative capabilities to a transformative force reshaping how developers approach their craft,” said Mike Loukides, vice president of emerging technology content at O’Reilly and the report’s author. “As foundational skills gain prominence and organizations increasingly adopt comprehensive security practices, professionals must prioritize upskilling to effectively integrate these tools into their operations. The future is not about fearing AI’s impact on jobs but in harnessing its potential to enhance productivity and drive innovation across industries.”

Podcast: The negative long-term impacts of AI on software development pipelines
https://sdtimes.com/ai/podcast-the-negative-long-term-impacts-of-ai-on-software-development-pipelines/ (Thu, 2 Jan 2025)

AI has the potential to speed up the software development process, but is it possible that it’s adding additional time to the process when it comes to the long-term maintenance of that code? 

In a recent episode of the podcast, What the Dev?, we spoke with Tanner Burson, vice president of engineering at Prismatic, to get his thoughts on the matter.

Here is an edited and abridged version of that conversation:

You had written that 2025 is going to be the year organizations grapple with maintaining and expanding their AI co-created systems, exposing the limits of their understanding and the gap between development ease and long-term sustainability. The notion of AI possibly destabilizing the modern development pipeline caught my eye. Can you dive into that a little bit and explain what you mean by that and what developers should be wary of?

I don’t think it’s any secret or surprise that generative AI and LLMs have changed the way a lot of people are approaching software development and how they’re looking at opportunities to expand what they’re doing. We’ve seen everybody from Google saying recently that 25% of their code is now being written by or run through some sort of in-house AI, and I believe it was the CEO of AWS who was talking about the complete removal of engineers within a decade. 

So there’s certainly a lot of people talking about the extreme ends of what AI is going to be able to do and how it’s going to be able to change the process. And I think people are adopting it very quickly, very rapidly, without necessarily putting all of the thought into the long term impact on their company and their codebase. 

My expectation is that this year is the year we start to really see how companies behave when they do have a lot of code they don’t understand anymore. They have code they don’t know how to debug properly. They have code that may not be as performant as they’d expected. It may have surprising performance or security characteristics, and having to come back and really rethink a lot of their development processes, pipelines and tools to either account for that being a major part of their process, or to start to adapt their process more heavily, to limit or contain the way that they’re using those tools.

Let me just ask you, why is it an issue to have code written by AI not necessarily being able to be understood?

So the current standard of AI tooling has a relatively limited amount of context about your codebase. It can look at the current file or maybe a handful of others, and do its best to guess at what good code for that particular situation would look like. But it doesn’t have the full context of an engineer who knows the entire codebase, who understands the business systems, the underlying databases, data structures, networks, systems, security requirements. You said, ‘Write a function to do x,’ and it attempted to do that in whatever way it could. And if people are not reviewing that code properly, not altering it to fit those deeper problems, those deeper requirements, those things will catch up and start to cause issues.

Won’t that actually even cut away from the notion of moving faster and developing more quickly if all of this after-the-fact work has to be taken on?

Yeah, absolutely. I think most engineers would agree that over the lifespan of a codebase, the time you spend writing code versus fixing bugs, fixing performance issues, altering the code for new requirements, is lower. And so if we’re focused today purely on how fast we can get code into the system, we’re very much missing the long tail and often the hardest parts of software development come beyond just writing the initial code, right?

So when you talk about long term sustainability of the code, and perhaps AI not considering that, how is it that artificial intelligence will impact that long term sustainability?

I think there, in the short run, it’s going to have a negative impact. I think in the short run, we’re going to see real maintenance burdens, real challenges with the existing codebases, with codebases that have overly adopted AI-generated code. I think long term, there’s some interesting research and experiments being done, and how to fold observability data and more real time feedback about the operation of a platform back into some of these AI systems and allow them to understand the context in which the code is being run in. I haven’t seen any of these systems exist in a way that is actually operable yet, or runnable at scale in production, but I think long term there’s definitely some opportunity to broaden the view of these tools and provide more data that gives them more context. But as of today, we don’t really have most of those use cases or tools available to us.

So let’s go back to the original premise about artificial intelligence potentially destabilizing the pipeline. Where do you see that happening or the potential for it to happen, and what should people be wary of as they’re adopting AI to make sure that it doesn’t happen?

I think the biggest risk factors in the near term are performance and security issues. And I think in a more direct way, in some cases, just straight cost. I don’t expect the cost of these tools to be decreasing anytime soon. They’re all running at huge losses. The cost of AI-generated code is likely to go up. And so I think teams need to be paying a lot of attention to how much money they’re spending just to write a little bit of code, a little bit faster, but in a more in a more urgent sense, the security, the performance issues. The current solution for that is better code review, better internal tooling and testing, relying on the same techniques we were using without AI to understand our systems better. I think where it changes and where teams are going to need to adapt their processes if they’re adopting AI more heavily is to do those kinds of reviews earlier in the process. Today, a lot of teams do their code reviews after the code has been written and committed, and the initial developer has done early testing and released it to the team for broader testing. But I think with AI generated code, you’re going to need to do that as early as possible, because you can’t have the same faith that that is being done with the right context and the right believability. And so I think whatever capabilities and tools teams have for performance and security testing need to be done as the code is being written at the earliest stages of development, if they’re relying on AI to generate that code.
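
Burson’s point about shifting review-style checks earlier is something teams can prototype with tooling they already have. The following is a minimal sketch of one possible pre-merge gate, assuming pytest and the Bandit security linter are installed and that the project’s source lives under src/; it illustrates the idea rather than prescribing any particular pipeline.

```python
# One possible early gate for AI-assisted changes: run the test suite and a
# static security scan before the code is handed off for human review.
# Assumes pytest and bandit are installed; adjust paths and flags per repo.
import subprocess
import sys

CHECKS = [
    ("unit tests", ["pytest", "-q"]),
    ("security scan", ["bandit", "-r", "src", "-q"]),
]


def run_checks() -> int:
    """Run each check and return the number of failures."""
    failures = 0
    for label, command in CHECKS:
        print(f"Running {label}: {' '.join(command)}")
        result = subprocess.run(command)
        if result.returncode != 0:
            print(f"FAILED: {label}")
            failures += 1
    return failures


if __name__ == "__main__":
    sys.exit(1 if run_checks() else 0)
```

A team might run a script like this from a pre-commit hook or an early CI stage, so AI-assisted changes get the same scrutiny as hand-written ones before they reach human reviewers.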

We hosted a panel discussion recently about using AI and testing, and one of the guys made a really funny point about it perhaps being a bridge too far that you have AI creating the code and then AI testing the code again, without having all the context of the entire codebase and everything else. So it seems like that would be a recipe for disaster. Just curious to get your take on that?

Yeah. I mean, if no one understands how the system is built, then we certainly can’t verify that it’s meeting the requirements, that it’s solving the real problems that we need. I think one of the things that gets lost when talking about AI generation for code and how AI is changing software development, is the reminder that we don’t write software for the sake of writing software. We write it to solve problems. We write it to enact something, to change something elsewhere in the world, and the code is a part of that. But if we can’t verify that we’re solving the right problem, that it’s solving the real customer need in the right way, then what are we doing? Like we’ve just spent a lot of time not really getting to the point of us having jobs, of us writing software, of us doing what we need to do. And so I think that’s where we have to continue to push, even regardless of the source of the code, ensuring we’re still solving the right problem, solving them in the right way, and meeting the customer needs.

Report: Data is a barrier to AI project success
https://sdtimes.com/ai/report-data-is-a-barrier-to-ai-project-success/ (Mon, 30 Dec 2024)

High-quality data is the key to a successful AI project, but it appears that many IT leaders aren’t taking the necessary steps to ensure data quality.

This is according to a new report from Hitachi Vantara, the State of Data Infrastructure Survey, which includes responses from 1,200 IT decision makers from 15 countries. 

The report found that 37% of respondents said that data was their top concern, with 41% of U.S. respondents agreeing that “using high-quality data” was the most common reason provided for why AI projects were successful, both in the U.S. and globally.

Hitachi Vantara also predicts that the amount of storage needed for data will increase by 122% by 2026, indicating that storing, managing, and tagging data is becoming more difficult. 

Challenges are already presenting themselves: just 38% of respondents say data is available to them the majority of the time, and only 33% said that the majority of their AI outputs are accurate. Additionally, 80% said that the majority of their data is unstructured, which could make things even more difficult as data volumes increase, Hitachi Vantara explained.

Further, 47% don’t tag data for visualization, only 37% are working on enhancing training data quality, and 26% don’t review datasets for quality.  

The company also found that security is a top priority, with 54% saying it’s their highest area of concern within their infrastructure. Seventy-four percent agree that a significant data loss would be catastrophic to operations, and 73% have concerns about hackers having access to AI-enhanced tools.

And finally, AI strategy isn’t factoring in sustainability concerns or ROI. Only 32% said that sustainability was a top priority and 30% said that they were prioritizing ROI of AI. 

Sixty-one percent of large companies are developing general LLMs instead of smaller, specialized models that could consume 100 times less power. 

“The adoption of AI depends very heavily on trust of users in the system and in the output. If your early experiences are tainted, it taints your future capabilities,” said Simon Ninan, senior vice president of business strategy at Hitachi Vantara. “Many people are jumping into AI without a defined strategy or outcome in mind because they don’t want to be left behind, but the success of AI depends on several key factors, including going into projects with clearly defined use cases and ROI targets. It also means investing in modern infrastructure that is better equipped at handling massive data sets in a way that prioritizes data resiliency and energy efficiency. In the long run, infrastructure built without sustainability in mind will likely need rebuilding to adhere to future sustainability regulations.”

GitHub Copilot Free launches to expand reach of platform to all developers
https://sdtimes.com/ai/github-copilot-free-launches-to-expand-reach-of-platform-to-all-developers/ (Thu, 19 Dec 2024)

GitHub has announced a free tier of GitHub Copilot to expand the platform’s reach to more developers.

“We couldn’t be more excited to make Copilot available to the 150M developers on GitHub,” Thomas Dohmke, CEO of GitHub, wrote in a post.

The free tier provides access to 2,000 code completions and 50 chat messages per month. It is integrated into VS Code and allows the developer to choose between Anthropic’s Claude 3.5 Sonnet and OpenAI’s GPT-4o model.

It provides access to many of the same features paid tiers have, such as debugging help, code explanations, the ability to turn comments into code, inline chat, commit message generation, and more.

However, it doesn’t include some advanced features like the ability to attach a knowledge base to chat or set guidelines for code reviews, or many management, security, and governance features.

The company also clarified that students, educators, and open source maintainers will still retain their free access to unlimited Copilot Pro accounts. 
