SNYK REPORT

AI Code, Security, and Trust: Organizations Must Change Their Approach

56.4% say insecure AI suggestions are common, but few have changed processes to improve AI security.

Executive Summary

In a short period of time, AI code completion tools have gained significant market penetration. In our survey of 537 software engineering and security team members and leaders, 96% of teams use AI coding tools, and over half of those teams use the tools most or all of the time. It is safe to say that AI coding tools are now part of the software supply chain at most organizations.

The use of AI tools has likely accelerated the pace of software code production and sped up new code deployment. On top of that, AI coding tools are polished and convincing. Unfortunately, this polish and ease of use has generated misplaced confidence in AI coding assistants and created a herd mentality that AI coding is safe. In reality, AI coding tools continue to consistently generate insecure code. Among respondents, 91.6% said that AI coding tools generated insecure code suggestions at least some of the time.

The risks of AI coding tools are magnified by the resulting accelerated pace of code development. This is particularly true for open source code, where keeping up with the latest security status of open source libraries and packages is challenging, with new vulnerabilities landing on a seemingly daily basis.

Despite these risks and challenges, our survey found that technology teams are not putting the proper measures and guardrails in place to secure their code in this new AI coding age. Less than 10% of survey respondents have automated the majority of their security checks and scanning. 80% of respondents said that developers in their organizations bypass AI security policies. Respondents are