Microsoft revealed that its 47,000 developers generate nearly 30,000 bugs a month
In a blog post on its security development lifecycle, the Microsoft security team revealed that, on average, the company's 47,000 developers generate nearly 30,000 bugs a month.
Security-related bugs are Microsoft's highest priority, but classifying that volume of bugs manually is impractical, so the company has been looking for alternatives.
The security team's latest solution is to apply artificial intelligence to the problem: after pre-training machine learning models, it automatically detects and classifies bugs.
Microsoft said its current machine learning model correctly distinguishes security bugs from non-security bugs 99% of the time, and identifies critical vulnerabilities with 97% accuracy.
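Figures like "99% correct" are typically computed as classification accuracy over a held-out evaluation set. The sketch below shows that arithmetic; the counts are invented for illustration, not Microsoft's actual evaluation numbers.

```python
# Illustrative only: accuracy = correctly classified / total evaluated.
# The counts below are invented example numbers, not Microsoft's data.
def accuracy(true_positive, true_negative, false_positive, false_negative):
    correct = true_positive + true_negative
    total = correct + false_positive + false_negative
    return correct / total

# e.g. 990 of 1,000 held-out bug reports labeled correctly
print(accuracy(true_positive=490, true_negative=500,
               false_positive=6, false_negative=4))  # → 0.99
```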
Microsoft's fundamental goal in building an AI system to identify bugs is to increase efficiency and reduce the manual effort required.
Over the past 20 years, Microsoft has collected 13 million work items and bugs. It used this data to train its machine learning models and to build its analysis tooling.
Microsoft says it uses a method called supervised learning, in which models learn to classify data correctly from pre-labeled examples.
Once training is complete, the model can classify previously unseen data, rank vulnerabilities by severity, and route them to developers so that security vulnerabilities are fixed first.
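The supervised-learning workflow described above can be sketched in miniature: train a classifier on pre-labeled bug titles, then use it to label a new, unseen report. This is a toy naive Bayes sketch with invented example data, not Microsoft's actual model or pipeline.

```python
# Toy supervised classifier for bug reports: multinomial naive Bayes
# over bag-of-words features. Training titles and labels are invented
# for illustration -- not Microsoft's data or model.
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

def train(examples):
    """Learn per-class word counts, class counts, and vocabulary
    from pre-labeled (title, label) pairs."""
    word_counts = defaultdict(Counter)   # label -> word -> count
    label_counts = Counter()             # label -> number of examples
    vocab = set()
    for title, label in examples:
        label_counts[label] += 1
        for word in tokenize(title):
            word_counts[label][word] += 1
            vocab.add(word)
    return word_counts, label_counts, vocab

def predict(title, word_counts, label_counts, vocab):
    """Return the label with the highest log-posterior,
    using Laplace (add-one) smoothing for unseen words."""
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in tokenize(title):
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Pre-labeled examples, as in supervised learning (invented data).
training = [
    ("buffer overflow in parser", "security"),
    ("sql injection in login form", "security"),
    ("privilege escalation via service", "security"),
    ("button misaligned on settings page", "non-security"),
    ("typo in error message", "non-security"),
    ("crash when window resized", "non-security"),
]
model = train(training)
print(predict("possible sql injection in search", *model))  # → security
```

A production system would use far richer features and models, but the shape is the same: labeled history in, a classifier out, and new bug reports ranked for triage.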
Although Microsoft cannot yet rely entirely on artificial intelligence to classify bugs, the system's accuracy is expected to improve over time.
For now, Microsoft still relies on security experts to curate training data and to regularly retrain and re-evaluate the models, though that manual involvement may gradually decrease.
Microsoft's senior security program managers and its data and applied scientists expressed confidence that the system will find more vulnerabilities, and classify them more accurately, in the future.
In addition, Microsoft said it will share its methodology on GitHub in the coming months, and the machine learning model is likely to be open-sourced and shared with the community.