AI writes code without human intervention
DeepMind has created an AI system called AlphaCode that, it claims, writes computer programs well enough to compete with human developers.
The system was tested on the kinds of problems used in competitive programming, and it placed within the top 54% of participants. The result marks a significant step for automated coding, though AlphaCode's performance is closer to that of an average competitor than an expert. The company says the system is still in the early stages of development, which makes the results all the more significant: they represent a step toward artificial intelligence that can solve coding problems as well as, or better than, humans.
AlphaCode was tested on tasks from Codeforces, a competitive programming platform that publishes new problems every week and ranks developers based on the solutions they submit. These challenges differ from those encountered when coding commercial applications, as they require broader knowledge: they resemble puzzles that combine logic, math, and computer programming.
Ten such challenges were submitted directly to AlphaCode, in the same format that human developers receive. The system generated a large number of candidate answers and then narrowed them down using the same verification steps available to human competitors. The whole process is automatic, without human intervention.
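The generate-then-filter approach described above can be sketched in a few lines. This is a toy illustration, not DeepMind's actual pipeline: candidate programs are stubbed in by hand, and `passes_examples` stands in for running a candidate against the example tests that accompany a Codeforces problem.

```python
# Sketch of the generate-then-filter idea: sample many candidate
# programs, then keep only those that pass the problem's example tests.
# All names here are illustrative assumptions, not DeepMind's API.

def passes_examples(solve, examples):
    """Return True if the candidate solves every (input, expected) pair."""
    for given, expected in examples:
        try:
            if solve(given) != expected:
                return False
        except Exception:  # a crashing candidate is filtered out too
            return False
    return True

def filter_candidates(candidates, examples):
    """Narrow a large candidate pool to those consistent with the examples."""
    return [solve for solve in candidates if passes_examples(solve, examples)]

# Toy demonstration with hand-written "candidates" for: double the input.
candidates = [
    lambda x: x + x,   # correct
    lambda x: x * x,   # coincidentally right for x=2, wrong elsewhere
    lambda x: x / 0,   # crashes on any input
]
examples = [(2, 4), (3, 6)]
survivors = filter_candidates(candidates, examples)
print(len(survivors))  # 1
```

The key design point is that filtering is cheap relative to generation: a model can afford to produce thousands of guesses because the example tests discard nearly all of them automatically.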
Five thousand users took part in solving all ten challenges on the Codeforces platform, and AlphaCode finished in the more successful half. Its creators say the results exceeded expectations, since solving these problems means not only applying algorithms but also devising the ones best suited to a given problem.
Programming could one day be fully automated, and many companies are already working toward it. Microsoft, for example, works with OpenAI on a system that completes partially written code. It works much like Gmail's Smart Compose: it suggests how to continue code you have started, just as Gmail suggests how to continue the text of an email.
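The completion idea can be illustrated with a deliberately simple model. This is not Microsoft's or OpenAI's system, just a hypothetical bigram suggester: it counts which token most often follows a given token in a tiny "training" corpus and proposes that as the continuation.

```python
# Toy illustration of completion-style assistance: suggest the most
# frequent continuation seen after a given token in a small corpus.
# Real systems use large neural models; this bigram counter is only
# an analogy for the "predict what comes next" mechanism.
from collections import Counter, defaultdict

def build_bigram_model(lines):
    """Map each token to a Counter of the tokens that followed it."""
    follow = defaultdict(Counter)
    for line in lines:
        tokens = line.split()
        for a, b in zip(tokens, tokens[1:]):
            follow[a][b] += 1
    return follow

def suggest_next(model, token):
    """Return the most common continuation of `token`, or None."""
    if token not in model:
        return None
    return model[token].most_common(1)[0][0]

corpus = [
    "for i in range(n):",
    "for item in items:",
    "for i in range(10):",
]
model = build_bigram_model(corpus)
print(suggest_next(model, "for"))  # 'i' (seen twice, vs. 'item' once)
```

A production completion model predicts over entire code contexts rather than single tokens, but the interaction is the same: the developer types a prefix, and the system ranks likely continuations.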
AI coding systems have advanced considerably in recent years, but they are still a long way from replacing human developers: the code they produce is often riddled with errors, and they sometimes reproduce copyrighted material.
One study found that roughly 40% of the code produced by the coding tool Copilot contained security vulnerabilities. Experts warn that cybercriminals could exploit this in the future by distributing code with hidden flaws; AI systems trained on such code could then reproduce those security errors in the programs they write. Because of such challenges, artificial intelligence in programming will, at least for some time, serve as an assistant whose suggestions are treated with caution.
https://youtu.be/Zqej-qDRu5Q