Not Insured by FDIC or Any Other Government Agency
Not Bank Guaranteed
Not Bank Deposits or Obligations
May Lose Value

How Artificial Intelligence is Used in Cyberattacks

Cybercriminals Take Advantage of Artificial Intelligence Tools

According to the FBI, cybercriminals are becoming more sophisticated in their attacks by leveraging artificial intelligence (AI) to conduct social engineering attacks. Cybercriminals can now use these tools to create scams uniquely tailored to specific victims and to remove the telltale red flags of common phishing attempts, all while faking familiar voices, faces, and mannerisms. Overall, this increases the chances of successful attacks that lead to financial fraud and data theft.

 

Artificial Intelligence in Phishing Emails

Cybercriminals use AI to analyze vast amounts of public data to craft emails that include specific details about the victim, such as their name, job title, recent activities, and email address, making the phishing attempt seem much more relevant and legitimate.

 

Potential Dangers of AI-Powered Phishing:

  • Increased Success Rate: Highly personalized phishing attempts are more likely to trick users into clicking malicious links or providing sensitive information.

  • Difficult to Detect: AI-generated phishing emails can appear very similar to legitimate communications, making them harder to identify as fraudulent.

  • Wider Reach: AI allows cybercriminals to automate phishing attacks, which helps them target a larger pool of potential victims.

 

Artificial Intelligence in Voice/Video Scams

AI is increasingly being used in voice and video scams as well, with criminals leveraging these tools to deceive victims, commit financial fraud, and steal sensitive information. AI tools can create realistic fake videos (deepfakes) or audio that mimic a person’s voice, facial expressions, and mannerisms. Cybercriminals use these to impersonate trusted individuals such as family members, celebrities, or executives and entice their victims into divulging sensitive information or sending money. For example, be on the lookout for deepfakes impersonating your trusted contacts to solicit financial information. Anything involving personally identifiable information (PII) should be handled in person or through known, verified callback numbers.

 

Potential Dangers of AI-Powered Voice/Video Scams:

  • Increased Realism: AI-generated content is often indistinguishable from real audio or video, making it harder for victims to detect scams.

  • Erosion of Trust: The ability to replicate voices and videos undermines trust in digital communication, as people can no longer rely on what they see or hear.

  • Targeted Exploitation: Scammers can use AI to target vulnerable individuals, such as the elderly or those unfamiliar with technology, increasing the likelihood of success.

  • Financial and Emotional Harm: Victims may lose money, sensitive information, or suffer emotional distress from being deceived by someone they “trust.”

 

How to Protect Yourself

  • Stay Informed: Keep up with the latest scam trends and tactics used by cybercriminals.

  • Use Strong Passwords and MFA: Create complex passwords for each account and enable multi-factor authentication to add an additional layer of security and verification.

  • Be Skeptical: Avoid clicking on links or downloading attachments from unknown sources. Independently verify the identity and contact information of anyone who reaches out to you.

  • Use a Safe Word: Establish a family or team “safe word” to confirm identities during sensitive communications.

  • Be Cautious of Urgent Requests: Scammers often create a sense of urgency, so take a moment to assess the situation before acting.
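As a sketch of the strong-password advice above, the short Python snippet below generates a random password using the standard library's `secrets` module (a hypothetical illustration, not a tool provided by the bank; the character set and length are assumptions you can adjust):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password mixing letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    # secrets draws from a cryptographically secure random source,
    # unlike the random module, whose output is predictable.
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

Even with a strong unique password per account, enabling multi-factor authentication remains essential, since a generated password can still be stolen through the phishing and deepfake scams described above.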

 

Key Takeaways

With the emergence of AI, cybercriminals have taken full advantage of these tools to create malicious, sophisticated cyberattacks that are increasingly difficult to detect. While AI-based cyberattacks are a growing concern, they are not insurmountable. By combining technology, education, and vigilance, you can protect yourself and your family from falling victim to these sophisticated attacks. The key is to remain proactive, skeptical, and informed in this evolving digital landscape.


This material is for general information only and is not intended to provide specific advice or recommendations for any individual. This material was prepared by LPL Financial, LLC
Tracking #711333 Exp 03/2027

Check the background of the financial advisors associated with this site on FINRA's BrokerCheck.

Old National Wealth Management is the umbrella marketing name/logo for wealth-related services, including Old National Wealth Advisors, Old National Private Banking and 1834 services. Old National Wealth Management, Old National Private Banking, and 1834 are not affiliated with LPL Financial.

Old National Wealth Advisors: Your Bank (“Financial Institution”) provides referrals to financial professionals of LPL Financial LLC (“LPL”) pursuant to an agreement that allows LPL to pay the Financial Institution for these referrals. This creates an incentive for the Financial Institution to make these referrals, resulting in a conflict of interest. The Financial Institution is not a current client of LPL for brokerage or advisory services. Please visit https://www.lpl.com/disclosures/is-lpl-relationship-disclosure.html for more detailed information.

Securities and advisory services are offered through LPL Financial (LPL), a registered investment advisor and broker-dealer (member FINRA/SIPC). Insurance products are offered through LPL or its licensed affiliates. Old National Bank and Old National Wealth Advisors are not registered as a broker-dealer or investment advisor. Registered representatives of LPL offer products and services using Old National Wealth Advisors, and may also be employees of Old National Bank. These products and services are being offered through LPL or its affiliates, which are separate entities from, and not affiliates of, Old National Bank, Old National Private Banking or Old National Wealth Advisors. Securities and insurance offered through LPL or its affiliates are:

Not Insured by FDIC or Any Other Government Agency Not Bank Guaranteed Not Bank Deposits or Obligations May Lose Value

The LPL Financial registered representative(s) associated with this website may discuss and/or transact business only with residents of the states in which they are properly registered or licensed. No offers may be made or accepted from any resident of any other state.

LPL Financial Form CRS