Future Tech

Using AI in your tech stack? Accuracy and reliability a worry for most

Tan KW
Publish date: Tue, 17 Sep 2024, 09:59 PM

Researchers are finding that most companies integrating AI into their tech stack have run headlong into performance and reliability issues with the resulting applications.

Borked applications are not a new phenomenon - there are any number of trendy methodologies and development approaches that can be blamed for a marked downturn in quality - yet the problem appears to be getting worse as companies turn to AI to create their applications without giving sufficient thought to the quality of the output.

Research published by Leapwork, drawn from the feedback of 401 respondents across the US and UK, noted that while 85 percent had integrated AI apps into their tech stacks, 68 percent had experienced performance, accuracy, and reliability issues.

The 401 respondents comprised 201 C-suite executives (CTOs and CIOs) and 200 technical leads (for example, IT managers).

Some notable outages in recent months were down to insufficient or inadequate testing - the CrowdStrike incident, for one, was at least partially attributable to doubtful practices at the cybersecurity company. And while AI might be seen as a panacea for companies seeking to increase productivity or cut costs (depending on your perspective), testing processes must evolve along with it.

According to the research, only 16 percent of companies reckoned their testing processes were efficient.

AI technologies are making rapid inroads into the developer world. In April 2024, Gartner claimed that 75 percent of enterprise software engineers would be using AI code assistants by 2028.

This would - if the forecast is accurate - represent a huge jump from the 10 percent recorded in early 2023.

That said, the quality of the suggestions is a cause for concern. Google was recently caught indexing inaccurate infrastructure-as-code examples, while numerous organizations have outright banned LLM-generated code.

Leapwork has skin in the game - it is all about test automation and offers, as is de rigueur nowadays, an "AI-powered visual test automation platform."

However, the report makes some salient points as companies rush to adopt AI technologies in the hope of realizing promised productivity gains. Robert Salesas, CTO at Leapwork, said, "For all its advancements, AI has limitations, and I think people are coming around to that fact pretty quickly.

"The rapid automation enabled by AI can dramatically increase output, but without thorough testing, this could also lead to more software vulnerabilities, especially in untested applications."

Indeed it could. Almost a third (30 percent) of C-suite executives said they did not believe their current testing processes would ensure reliable AI applications.

One approach is to put AI to work as part of testing itself, using AI-augmented testing tools. However, despite some trust in their results (64 percent of C-suite respondents liked what they saw, compared to 72 percent of technical teams), prudence remains the watchword: 68 percent of C-suite executives believe human validation will continue to be essential.
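
To make that human-validation idea concrete, here is a minimal sketch in Python (standard library only) of the pattern the respondents describe: machine-generated test cases run automatically, with failures routed to a human review queue rather than auto-accepted. The generate_test_cases function is a hypothetical stand-in for whatever AI tool produces the cases - it is not Leapwork's product or any real vendor API.

    # Minimal sketch of AI-augmented testing with a human validation gate.
    # `generate_test_cases` is a hypothetical stand-in for an AI test
    # generator; everything else is plain Python standard library.
    from dataclasses import dataclass

    @dataclass
    class TestCase:
        name: str
        input_value: int
        expected: int

    def generate_test_cases() -> list[TestCase]:
        # Hypothetical AI-generated cases, hard-coded so the sketch is
        # self-contained; one expectation is deliberately wrong to show
        # why a human gate matters.
        return [
            TestCase("doubles_positive", 2, 4),
            TestCase("doubles_zero", 0, 0),
            TestCase("doubles_negative", -3, -6),
            TestCase("suspect_case", 5, 11),
        ]

    def function_under_test(x: int) -> int:
        return x * 2

    def run_with_human_gate(cases: list[TestCase]) -> list[TestCase]:
        needs_review = []
        for case in cases:
            actual = function_under_test(case.input_value)
            if actual == case.expected:
                print(f"PASS {case.name}")
            else:
                # A failure may mean a real bug *or* a bad AI-generated
                # expectation, so a human reviewer decides which - the
                # case is queued, not auto-accepted or auto-discarded.
                needs_review.append(case)
                print(f"FLAG {case.name}: expected {case.expected}, got {actual}")
        print(f"{len(needs_review)} case(s) queued for human validation")
        return needs_review

    if __name__ == "__main__":
        run_with_human_gate(generate_test_cases())

Running the sketch passes the three sound cases and flags "suspect_case" for review - the kind of bad machine-generated expectation that, without a human in the loop, would either mask a bug or fail a healthy build.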

The research shows that a headlong charge into AI assistance might result in more applications being churned out, but uncertain quality and unsuitable testing processes mean that devs need to give thought to how they validate those applications and integrations. ®

 

https://www.theregister.com//2024/09/17/ai_is_great_for_churning/
