We are entering the age of what Tesla AI director Andrej Karpathy calls “Software 2.0,” where neural networks write the code and people’s main jobs are defining the tasks, collecting the data, and building the user interfaces.
But not all tasks can be tackled by neural networks — at least, not yet — and traditional software development still has a role to play. Even there, however, artificial intelligence, machine learning, and advanced analytics are changing the way that software is designed, written, tested, and deployed.
Brazil-based TOTVS provides mission-critical industry software for about 100,000 enterprise customers. For example, trillions of dollars are transacted each day through its financial services solutions.
Such applications require rigorous testing. Test designers must be extremely deliberate about how they construct testing scenarios, and each one can take several hours to create.
“Imagine rewriting thousands and thousands of use cases,” Goetten says.
TOTVS turned to artificial intelligence for help. The platform TOTVS uses to run tests, Functionize, now supports the intelligent creation of test cases. The technology can look at a screen the way a human does to identify where input fields and buttons are, instead of relying on the underlying code. It can also come up with testing scenarios and sample data to stress applications.
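To make the idea of machine-generated test data concrete, here is a rough sketch of automated test-input generation for form fields, including the boundary and malformed values that stress an application. This is a hand-rolled illustration of the general technique, not how Functionize actually works; the field types and edge cases are assumptions for the example.

```python
import random
import string

def generate_field_inputs(field_type, count=5):
    """Generate sample inputs for a form field, including edge cases.
    A simplified illustration of automated test-data generation."""
    if field_type == "email":
        samples = [f"user{i}@example.com" for i in range(count - 2)]
        samples += ["", "not-an-email"]          # empty and malformed
    elif field_type == "amount":
        samples = [str(round(random.uniform(0.01, 1e6), 2)) for _ in range(count - 2)]
        samples += ["-1", "0"]                   # boundary values
    else:  # free text
        samples = ["".join(random.choices(string.ascii_letters, k=12)) for _ in range(count - 2)]
        samples += ["", "a" * 10_000]            # empty and oversized
    return samples

print(generate_field_inputs("email"))
```

A real AI-driven testing platform would go much further, learning which inputs actually expose failures, but the payoff is the same: values a human tester would take hours to enumerate are produced in seconds.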
“Before, a senior QA would take a day to complete a test case in the legacy solution we were using,” he says. “Now, in minutes they can create the same test case.”
The latest addition is the ability to understand plain English, says Goetten.
“You can tell it what to test, and it will automatically create a test case for you,” he says. “This is opening a whole new door for us. We can have less senior QAs write test cases for us.”
Monitoring and deployment
Even when software makes it through QA, it doesn’t always work as intended. “Just this morning, we had some product data that was introduced and the website was not ready to handle this data,” says Patrick Berry, senior director of technology at Build.com, an online home improvement retailer.
Hundreds of hours went into monitoring the performance of Build.com’s software, and when a problem arose, the company rolled the software back to a known-good state and sent it to developers to fix the issues.
“The problem that we faced was that the software we write was getting so complex and at a scale of traffic where it was beyond any one person or even a team of people to look at all the monitoring systems we have in place and say, ‘Things are good,’ or ‘Things are bad; do something now,'” says Berry. “It soaked up too much time and slowed down the releases. We could not get value to customers fast enough and we weren’t getting feedback to developers fast enough that things needed to be remediated.”
So Build.com moved to Harness, a software-delivery-as-a-service platform, which dropped the time spent on performance monitoring to almost zero and increased the speed of deployment twenty-fold, he says. Now, if there’s a problem, the platform’s built-in machine learning detects it, the system automatically reverts to a prior known-good state, and the issue is sent off for remediation. Build.com is also looking at using AI as part of the code development process.
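The core decision behind an automated rollback can be sketched simply: compare a post-deploy metric against its pre-deploy baseline and revert when it is a statistical outlier. The threshold logic below is a minimal stand-in for the learned models a platform like Harness would use, with made-up error rates.

```python
from statistics import mean, stdev

def should_roll_back(baseline_error_rates, current_error_rate, sigmas=3.0):
    """Flag a deployment for automatic rollback when its error rate is an
    outlier versus the pre-deploy baseline (a simple stand-in for the
    learned anomaly models a delivery platform would use)."""
    mu = mean(baseline_error_rates)
    sd = stdev(baseline_error_rates)
    return current_error_rate > mu + sigmas * max(sd, 1e-9)

baseline = [0.010, 0.012, 0.011, 0.009, 0.010]   # error rates before deploy
print(should_roll_back(baseline, 0.011))  # → False: within normal variation
print(should_roll_back(baseline, 0.08))   # → True: spike, roll back
```

The value of automating this check is exactly what Berry describes: no person has to stare at dashboards to declare “things are good” or “things are bad.”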
“We don’t actually have code writing code yet,” he says. “But where AI and ML can lend a hand on the development side, it’s really about understanding what common patterns are good or bad. It can highlight that this is an anomaly, and we can go back and remediate that.”
Berry also hopes to see more tools that leverage AI to help companies write better, more secure code in the first place.
“That’s where we’re really looking to use artificial intelligence and machine learning on the development front — to augment those areas where you can’t just throw enough people at the problem,” he says. “Say, your code base has millions of lines of code. How many people are you going to throw at auditing those millions of lines of code? We need solutions that scale.”
For example, Build.com uses GitHub as its code repository. “They are introducing certain systems that will monitor your code and alert you to potential vulnerabilities in the third-party libraries we utilize,” he says.
This is an active area of development for GitHub, says Omoju Miller, machine learning engineer at GitHub. “We are working on building models that support common vulnerabilities and exposures discovery,” she says.
GitHub also just released a tool to help developers spot where they’ve accidentally shared their tokens in code, she says.
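Token scanning of the kind Miller describes boils down to matching known credential formats against every committed line. The patterns below are illustrative only; GitHub’s actual secret scanning uses vendor-registered token formats and validity checks, and the sample "leaked" strings are fabricated for the example.

```python
import re

# Illustrative patterns only; real secret scanners use vendor-registered
# token formats and often verify a match is a live credential.
SECRET_PATTERNS = {
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"""api[_-]?key\s*[:=]\s*['"][^'"]{16,}['"]""", re.I),
}

def scan_for_secrets(source):
    """Return (pattern_name, line_number) for each suspected leaked credential."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((name, lineno))
    return hits

code = 'aws = "AKIAIOSFODNN7EXAMPLE"\napi_key = "0123456789abcdef0123"\n'
print(scan_for_secrets(code))  # → [('aws_access_key', 1), ('generic_api_key', 2)]
```

Simple regexes like these scale to millions of lines in a way manual auditing never could, which is precisely the “you can’t throw enough people at it” problem Berry raises.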
GitHub is also working on tools that “help developers discover functions in a natural way,” Miller says. With AI, developers can search for functions based on their intent, she says.
“Using the vast amount of publicly available code on GitHub’s open source coding platform, the machine learning research team has made significant progress in enabling this,” she says. “With semantic code search, the developer can augment and simplify their computational problem-solving needs.”
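The intuition behind semantic code search is to compare the meaning of a query against the meaning of each function, rather than matching exact keywords. The toy version below uses bag-of-words vectors and cosine similarity as a stand-in for the learned embeddings GitHub’s research team works with; the miniature "code base" is invented for the example.

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words vector over lowercased tokens.
    Real semantic code search uses learned vector representations."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# A tiny "code base": function names paired with their docstrings.
functions = {
    "parse_csv": "read rows from a comma separated values file",
    "retry_request": "retry a failed http request with exponential backoff",
    "hash_password": "hash a user password with a salt for storage",
}

def search(query):
    """Return the function whose description best matches the query's intent."""
    q = embed(query)
    return max(functions, key=lambda name: cosine(q, embed(functions[name])))

print(search("how do I retry failed network calls"))  # → retry_request
```

Even this crude version finds `retry_request` without the query mentioning the function name, which is the “search by intent” idea Miller describes, scaled down to a few lines.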
That means developers will no longer be limited by what they know, Miller says. “They can leverage all the knowledge about code that is stored on GitHub to help them solve their problem.”
AI technologies are also showing up in static and dynamic software analysis tools, says Ray Wang, principal analyst and founder at Constellation Research.
“The machine learning capabilities are already a lot richer than where we were 18 months ago,” he says, “And we’re starting to see neural nets being applied. Right now, it’s more for static analysis than dynamic, but we’ll see the emergence of [AI-powered] dynamic analysis in the next few years.”
When it comes to writing new code from scratch, however, current technology leaves something to be desired, says Build.com’s Berry.
“There are certain systems out there now, your integrated development environments, but it’s more like cut-and-paste with internally built templates,” he says.
But that is starting to change. The most popular IDE, Microsoft’s Visual Studio, has AI-assisted code completion built into its latest version, released in April. The functionality is based on machine learning from thousands of open-source GitHub repositories, according to Mark Wilson-Thomas, Microsoft’s senior program manager for Visual Studio IntelliCode.
“We distill the wisdom of the open source community’s code,” says Amanda Silver, Microsoft’s partner director of program management for Visual Studio and Visual Studio Code.
IntelliCode also helps developers understand how common classes are used, she adds, “which is especially useful when working on unfamiliar code.”
In a recent survey of IntelliCode users, more than 70 percent reported that they felt more productive when using the new AI-powered IntelliCode versus the classic IntelliSense, she says.
Enterprises using this tool can also create custom, private models for their own employees, she says.
“This enables IntelliCode to speak the local dialect of your team or organization without transmitting your source code to Microsoft,” she says.
Gartner analyst Svetlana Sicular says that this kind of functionality is why Microsoft bought GitHub in the first place.
GitHub, which hosts more than 100 million repositories — over 25 million of which are open source — was bought by Microsoft last year. The platform is free for public repositories, as well as for small private projects.
“GitHub is a depository of code,” says Sicular. “My theory is that Microsoft will use it to generate new code.”
Smart application development platforms
Build.com’s Berry is also keeping an eye on what’s happening in the low-code and no-code space.
“It’s not new at all,” he says. “Developers have been plugging together systems as long as there has been development.”
More recently, this has made deploying AI systems easier and faster, he says. “Pre-canned solutions for recommendation engines, for example, turned what used to be very difficult and custom-built solutions into commodities,” he says.
Now, this low-code approach is getting even more intelligent, allowing companies to stop wasting time on building commodity systems, he says. “It’s really about giving us an opportunity to come up with brand-new, genuine innovations. I’m very excited about what these fields can provide for us going forward,” he says.
Take, for example, Mendix. The company has been offering a building-block system for creating applications for about a decade. Developers snap together functionality from the options available on the platform, and, when those aren’t enough, link to external code for the missing pieces. Now, the company has built a deep learning system to analyze these models, look at their behavior in production to see which ones are the most successful, and identify patterns based on those models.
Still, there is resistance from IT departments to using these platforms, and a lack of confidence from the business side, says Vikram Kunchala, application security leader at Deloitte Cyber.
“Adoption seems to be more of a curiosity at this point,” he says. “Enterprises are experimenting with it in smaller chunks. Or if they have to get something out fast — we’ve seen some of that. But I haven’t seen any client adopt this as an enterprise standard that I’m aware of.”
But the biggest change of all is the move towards applications that don’t have any connection to traditional code at all.
Say, for example, you want to build an app that plays Tic-Tac-Toe. You could program in the rules and a game-playing strategy. If the opponent does this, you do that. The developer’s job is to pick the right strategy and to create an attractive user interface.
If the goal is to beat human players, this strategy works for Tic-Tac-Toe, checkers, and even chess. But for more difficult games, like Go, creating a set of rules is too difficult. Enter AI technologies such as deep learning and neural networks, which turn the software process on its head.
Instead of starting with the rules, developers start with data — large numbers of games. With AlphaGo, Google’s DeepMind trained the system on thousands of human games. With the latest version, AlphaGo Zero, the training data was games that the system played against itself, starting with random moves.
As long as the training data is clear and there’s enough of it, and the criteria for success or failure are also clear, then this approach has the potential to revolutionize software development. Developers, instead of trying to figure out and code the rules of the game, now have to work on managing training data and success criteria and leave the actual coding to the system.
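For contrast, here is what the traditional, hand-coded approach from the Tic-Tac-Toe example actually looks like: a developer explicitly encodes the strategy as rules (win if you can, block if you must, prefer the center and corners). This is a rough sketch of the “program in the rules” style the article describes, the kind of logic a learned system replaces with training data.

```python
def best_move(board, player):
    """Hand-coded Tic-Tac-Toe strategy: explicitly programmed rules
    (win > block > center > corner > anything).
    board is a list of 9 cells: 'X', 'O', or None."""
    opponent = "O" if player == "X" else "X"
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winning_move(who):
        for a, b, c in lines:
            cells = [board[a], board[b], board[c]]
            if cells.count(who) == 2 and cells.count(None) == 1:
                return (a, b, c)[cells.index(None)]
        return None

    move = winning_move(player)          # 1. win now if possible
    if move is None:
        move = winning_move(opponent)    # 2. otherwise block a loss
    if move is None:
        for cell in (4, 0, 2, 6, 8):     # 3. prefer center, then corners
            if board[cell] is None:
                move = cell
                break
    if move is None:                     # 4. take anything left
        move = board.index(None)
    return move

# X threatens to win across the top row; O must block at cell 2.
board = ["X", "X", None,
         "O", None, None,
         None, None, None]
print(best_move(board, "O"))  # → 2
```

Every branch above is a rule a human chose and wrote down. For Tic-Tac-Toe that is easy; for Go, enumerating such rules is hopeless, which is why the data-first approach wins there.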
That’s exactly the approach Tesla is taking with its self-driving cars, says Andrej Karpathy, director of AI at Tesla.
“This is a whole new way of designing software,” he said in a technology conference keynote last year. “Now, instead of writing code explicitly, we are instead accumulating and massaging data sets, and those are effectively the code.”
So, for example, Tesla’s windshield wipers had trouble knowing when to turn on and off when the cars were driving through tunnels. In traditional software development, the programmers would go and look through the code to find where the faulty logic was. With Software 2.0, the developers look at the data, instead.
In this particular case, for example, there wasn’t enough training data for cars driving through tunnels. Tesla had to go out, get more data, annotate that data, add it to the training data set, and re-run the deep learning algorithms.
“We made all the problems look the same with this approach,” says Karpathy.
There’s still room for traditional development, he adds. Currently, the user interfaces for these systems and the integrations with other platforms are still built manually.
But as more companies turn to AI for applications where plenty of data is available, and to low-code platforms for the rest, the work of software development will transform dramatically in the very near future.