How to Avoid Common AI Scams

In the evolving AI landscape, learn how to spot and avoid scams.

Key Takeaways

  • Cybercriminals are using AI to carry out, as well as increase the scale of, a variety of attacks such as phishing, vishing and password hacking.
  • Cybersecurity organizations also increasingly rely on AI to help flag suspicious data to detect or thwart attacks.
  • To help keep your data safe, discover common scams and learn how to protect yourself. 

If you’ve recently used your car’s GPS system, auto-correct when writing an email, or ChatGPT to ask a question, you’ve leveraged artificial intelligence (AI).


As the digital landscape rapidly evolves, AI will continue to play a pivotal role in daily life. So, let’s discuss the basics of AI, common AI scams, how cybersecurity teams are leveraging AI to thwart attacks, and the steps you can take to protect your assets and information.

What is AI?

AI is a broad term for the science of simulating human intelligence in machines, with the goal of enabling them to think like us and mimic our actions. This allows AI to perform tasks that previously only human beings could handle and, in some cases, to surpass human capabilities.


Many AI platforms attempt to determine the best way to achieve an outcome or solve a problem. They typically do this by analyzing enormous amounts of training data and then finding patterns in the data to replicate in their own decision-making.

How are scammers leveraging AI?

Unfortunately, cybercriminals are relentless and resourceful. Let’s look at four ways they’re using AI for their own benefit and how you can avoid common scams.


  1. Social engineering schemes:

    These schemes rely on psychological manipulation to trick individuals into revealing sensitive information or making other security mistakes. They span a broad range of fraud categories, including phishing, vishing and business email compromise scams.


    AI allows cybercriminals to automate many of the processes used in social engineering attacks, as well as create more personalized, sophisticated and effective messaging to fool unsuspecting victims. This means cybercriminals can generate a greater volume of attacks in less time — and experience a higher success rate.


    How to avoid: Always avoid giving out your personal information in response to unsolicited phone calls, texts and emails. Before responding to any individuals requesting data, make sure the person asking for the information is from a legitimate organization and is who they claim to be. You can always hang up and call the organization back using a phone number found through a trusted source – such as the company’s mobile application, official website or a financial statement.

  2. Password hacking:

    Cybercriminals exploit AI to improve the algorithms they use for deciphering passwords. The enhanced algorithms provide quicker and more accurate password guessing, which allows hackers to become more efficient and profitable. This may lead to an even greater emphasis on password hacking by cybercriminals.


    How to avoid: Don’t reuse the same or similar passwords across multiple websites and applications. If you do, a hacker who compromises one of your accounts could gain access to every other account that shares that password. Additionally, consider using a password manager, which will create unique, lengthy and complex passwords for you and then store them in an encrypted state. Furthermore, enable Multi-Factor Authentication (MFA) to log in to any website or application you use for financial transactions or that has access to your personal data. MFA is essentially another check, beyond your username and password, used to verify your identity and protect access to your account. Common forms of MFA include authentication via one-time security codes, biometrics and authenticator apps. (A brief sketch of how unique passwords and one-time codes work appears after this list.)

  3. Deepfakes:

    This type of deception leverages AI’s ability to easily manipulate visual or audio content and make it seem legitimate. This includes using phony audio and video to impersonate another individual. The doctored content can then be broadly distributed online in seconds — including on social media platforms — to create stress, fear or confusion among those who consume it.


    How to avoid: Be cautious of urgent voice or video messages asking for money or personal information. Additionally, only answer phone calls from numbers you recognize. You should never give out your personal information over the phone or via direct messages when you receive an unsolicited communication. Before you respond, make sure the person asking for the information is who they claim to be. 

  4. AI-powered investment scams:

    Bad actors leverage AI to create fake websites and social media profiles that advertise fraudulent investment opportunities. Often, these investments promise low risk and a high return. Once their victims make an “investment,” the fraudsters disappear with the money.


    How to avoid: Avoid making investment decisions solely on information promoted through social media platforms, especially if the investment opportunities seem too good to be true. Always seek professional advice when it comes to investment opportunities and authenticate the details of an investment before transferring any funds.
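
To make the password and MFA advice in item 2 a bit more concrete, here is a minimal Python sketch of what a password manager and an authenticator app do behind the scenes. It is an illustration only, not any specific product’s implementation: the password function simply draws random characters, and the one-time-code function follows the standard time-based one-time password (TOTP) approach used by common authenticator apps. The shared secret shown is a made-up example.

```python
import base64
import hashlib
import hmac
import secrets
import string
import struct
import time


def generate_password(length: int = 20) -> str:
    """Create a unique, random password, similar to what a password manager generates."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))


def one_time_code(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time code (TOTP), the kind an authenticator app displays."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // interval)  # 30-second time step
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % (10 ** digits)).zfill(digits)


# A fresh password for each account, plus a 6-digit code that changes every 30 seconds.
print(generate_password())                 # different every run
print(one_time_code("JBSWY3DPEHPK3PXP"))   # hypothetical shared secret, for illustration only
```

The details of the code matter less than the idea: every account can have its own long, random password, and an MFA code expires within seconds, so a stolen password alone isn’t enough to get in.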

How AI can enhance cybersecurity efforts

AI is reshaping nearly every industry, and cybersecurity is no exception. Many companies are increasing their efforts to mitigate AI-related risks that pertain to inaccuracy, cybersecurity and intellectual property infringement.1 Because AI can analyze enormous sets of data and find patterns, it is uniquely suited to handle tasks such as:


  • Detecting cyberattacks more accurately than humans, creating fewer false-positive results, and prioritizing responses based on their real-world risks.
  • Identifying and flagging suspicious emails and messages often employed in phishing campaigns.
  • Simulating social engineering attacks, which helps security teams spot potential vulnerabilities before cybercriminals exploit them.
  • Analyzing huge amounts of incident-related data rapidly, so that security teams can swiftly take action to contain the threat.


Additionally, AI has the potential to be a game-changing tool in penetration testing — intentionally probing the defenses of software and networks to identify weaknesses. By developing AI tools to target their own technology, organizations will be better able to identify their weaknesses before hackers can maliciously exploit them.


Having this intelligence would provide organizations with a significant edge in preventing future attacks. Stopping breaches before they occur would not only help protect the data of individuals and companies, but also lower IT costs for businesses. 

Staying secure in a changing AI environment

As AI evolves, concerns about data privacy and risk management for both individuals and businesses continue to grow. Regulators are considering how to foster AI development and maximize its benefits while reducing the likelihood of negative impacts on society. However, there is currently no comprehensive federal AI legislation in the United States.


So, what does all this mean to you? How do the advancements in AI impact your life from a security perspective? Fortunately, the answer is surprisingly simple. You should review your current cybersecurity strategy and make sure it follows best practices in critical areas such as password management, data privacy, personal cybersecurity and social engineering.


In addition, with scams on the rise, protecting your assets and personal information remains our top priority. One way to protect your accounts, or those of a loved one, from fraud and financial scams is to add a trusted contact. A trusted contact is a person you designate to be contacted if we are unable to reach you or if there are concerns about your well-being or potential financial exploitation. It is important to note that a trusted contact does not have permission to access account details, make decisions or perform any actions on your behalf. This individual simply serves as an additional layer of defense in case issues arise.


As always, it’s a good idea to regularly visit our Security Center for updates regarding AI and our latest cybersecurity tips. Staying secure makes it easier for all of us to enjoy the conveniences and other enhancements AI brings to our daily lives.


Report an Online Security Concern

If you suspect you may be the victim of fraud or identity theft, or if you notice suspicious account activity or receive a questionable email or text that appears to be from Morgan Stanley, please contact us immediately at
888-454-3965
(24 hours a day, 7 days a week)
For international clients, please contact your Morgan Stanley Client Representative immediately to report any online fraud or security concerns.