Take note of your current approach to scam warnings and phishing issues, and replicate the elements that work to establish AI scam training across your company. Don't assume your teams will know what to look for. Scams are getting more sophisticated, and you need the right training to stay ahead.
Don't treat these as 'standard' scams, because they're not, and your teams may not be familiar with how they work. Invest in regular training so your team stays one step ahead and knows what to look out for.
Protecting against AI scams and deepfakes requires building systems that never treat appearance or familiarity as authorization. In my experience with digital security and high-risk platforms, the greatest weakness is not the technology; it is human trust, exploited by something that sounds or looks right. The answer is to eliminate single points of failure. No transaction, access request, or approval should rest on a single email, voice call, or video. Use strong multi-factor authentication, hard approval flows, and segregation of duties between teams. If one person is fooled, the system must prevent them from causing harm on their own.

This is not theoretical. Identity-faking tools are already available and getting better. Don't base your defense on detecting the fake. Build your systems so the fake can't do anything, even when nobody realizes it is fake.
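The "eliminate single points of failure" principle above can be sketched in code. The following is a minimal, hypothetical illustration (the class names and policy are illustrative, not from any real library): a high-risk action requires sign-off from multiple distinct people, and the requester can never approve their own request, so a deepfake that fools one person through one channel still cannot authorize anything by itself.

```python
from dataclasses import dataclass, field


@dataclass
class ApprovalRequest:
    """A pending high-risk action, e.g. a wire transfer or access grant."""
    action: str
    requester: str
    approvals: set = field(default_factory=set)


class ApprovalPolicy:
    """Requires sign-off from at least `required` distinct approvers,
    none of whom may be the requester (segregation of duties)."""

    def __init__(self, required: int = 2):
        self.required = required

    def approve(self, request: ApprovalRequest, approver: str) -> None:
        # Segregation of duties: no one approves their own request.
        if approver == request.requester:
            raise PermissionError("requester cannot approve their own request")
        request.approvals.add(approver)

    def is_authorized(self, request: ApprovalRequest) -> bool:
        # Only authorized once enough independent people have signed off.
        return len(request.approvals) >= self.required
```

The design choice here matters more than the code: even if an attacker perfectly impersonates the requester's voice or face, the fake identity can only ever fill one of the required slots, so the harm is blocked structurally rather than by anyone spotting the fake.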