Four ways criminals could use AI to target more victims

Illustration: Golden day (Shutterstock)

Artificial intelligence (AI) warnings are ubiquitous right now. They include dire messages about the potential of AI to cause the extinction of humanity, invoking images from the Terminator films. British Prime Minister Rishi Sunak has even held a summit to discuss AI safety.

However, we have been using AI tools for a long time, from the algorithms used to recommend relevant products on shopping websites, to cars with technology that recognizes road signs and provides lane positioning. AI is a tool for increasing efficiency, processing and sorting large volumes of data, and easing decision-making.

However, these tools are open to everyone, including criminals, and we are already seeing early-stage adoption of AI by criminals. Deepfake technology has been used to generate revenge pornography, for example.

Technology improves the efficiency of criminal activity. It allows offenders to target more people and helps them appear more plausible. Observing how criminals have adapted to and adopted technological advances in the past can provide some clues as to how they might use AI.

1. A better phishing hook

AI tools like ChatGPT and Google's Bard provide writing support, for example enabling inexperienced writers to create effective marketing messages. However, this technology could also help criminals appear more credible when contacting potential victims.

Think of all those poorly written and easily detectable spam phishing emails and messages. Being plausible is the key to being able to get information from a victim.

Phishing is a numbers game: an estimated 3.4 billion spam emails are sent every day. My calculations show that if criminals were able to enhance their messaging so that just 0.0005% of them convinced someone to disclose information, that would result in 6.2 million more phishing victims each year.
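To put that scale in perspective, here is the back-of-the-envelope arithmetic as a minimal sketch; the only inputs are the figures quoted above (3.4 billion emails per day and 6.2 million extra victims per year):

```python
# Phishing is a numbers game: work out what success rate is needed
# to produce millions of victims from the global spam volume.
EMAILS_PER_DAY = 3.4e9                # estimated spam emails sent daily
emails_per_year = EMAILS_PER_DAY * 365

extra_victims = 6.2e6                 # additional victims per year
needed_rate = extra_victims / emails_per_year

print(f"emails per year: {emails_per_year:.3e}")
print(f"success rate needed: {needed_rate:.4%}")  # roughly five in a million
```

Even a vanishingly small success rate, multiplied across more than a trillion messages a year, yields millions of victims, which is why more convincing AI-written messages matter.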

2. Automated interactions

One of the earliest uses of AI tools was to automate interactions between customers and services via text messaging, chat, and the phone. This has enabled faster customer response and improved business efficiency. Your first contact with an organization is likely to be with an AI system, before you can speak to a human.

Criminals can use the same tools to create automated interactions with large numbers of potential victims, on a scale not possible if it was only performed by humans. They can impersonate legitimate services like banks over the phone and via email, in an attempt to obtain information that would allow them to steal your money.

3. Deepfakes

AI is really good at generating mathematical models that can be trained on large amounts of real-world data, making those models better at a given task. Deepfake video and audio technology is one example. The deepfake act Metaphysic recently demonstrated the technology's potential with a video of Simon Cowell singing opera on the television show America's Got Talent.

This technology is beyond the reach of most criminals, but the ability to use AI to mimic the way a person replies to messages, writes emails, leaves voice notes or makes phone calls is freely available. So is the data needed to train such a system, which can be gleaned from social media videos, for example.

Deepfake act Metaphysic performs on America's Got Talent.

Social media has always been a hotspot for criminals seeking to extract information about potential targets. There is now the potential to use AI to create a deepfake version of you. That deepfake could be exploited to interact with your friends and family, convincing them to give criminals information about you. Gaining a better view of your life makes it easier to guess your passwords or PINs.

4. Brute force

Another technique used by criminals called brute forcing could also benefit from AI. This is where many combinations of characters and symbols are tried in turn to see if they match your passwords.

That’s why long, complex passwords are more secure; they are more difficult to guess with this method. Brute forcing is resource intensive, but it’s easier if you know something about the person, since that allows lists of potential passwords to be sorted by priority, increasing the efficiency of the process. For example, criminals might start with combinations that refer to the names of family members or pets.

Algorithms trained on your data could be used to help build these priority lists more accurately and target many people at once, so fewer resources are needed. Specific artificial intelligence tools could be developed that collect your online data, then analyze it all to create a profile of you.
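The kind of prioritization described above can be sketched in a few lines. This is a purely illustrative toy (all tokens are invented, and real tooling is far more elaborate): personal details scraped from a profile are expanded into a candidate list, ordered so the most personal guesses come first.

```python
from itertools import product

# Toy sketch of a prioritized guess list built from personal details.
# All data here is invented for illustration.
personal_tokens = ["rex", "sophie", "1987"]   # pet, relative, birth year
common_suffixes = ["", "1", "123", "!"]

candidates = []
# Highest priority: single tokens with common variations.
for token in personal_tokens:
    for variant in (token, token.capitalize()):
        for suffix in common_suffixes:
            candidates.append(variant + suffix)

# Lower priority: two different tokens joined together.
for a, b in product(personal_tokens, repeat=2):
    if a != b:
        candidates.append(a + b)

# Deduplicate while preserving the priority order.
candidates = list(dict.fromkeys(candidates))
print(candidates[:5])  # the first, most personal guesses
```

An attacker working down a list like this tries far fewer combinations than a blind brute-force search, which is exactly the efficiency gain that AI-built profiles could amplify.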

If, for example, you’ve been posting frequently on social media about Taylor Swift, manually examining your posts for password clues would be hard work. Automated tools do it quickly and efficiently. All of this information would go into your profile, making it easier to guess your passwords and PINs.

Healthy skepticism

We shouldn’t be afraid of AI, as it could bring real benefits to society. But as with any new technology, society must adapt to and understand it. Although we now take smartphones for granted, society had to adjust to having them in our lives. They have been largely beneficial, but uncertainties remain, such as what constitutes a healthy amount of screen time for children.

As individuals, we should be proactive in our attempts to understand AI, not complacent. We should develop our own approaches to it while maintaining a healthy sense of skepticism. We will have to consider how to verify the validity of what we are reading, hearing or seeing.

These simple acts will help society reap the benefits of AI while ensuring we can protect ourselves from potential harm.

Want to learn more about AI, chatbots and the future of machine learning? Check out our full coverage of artificial intelligence, or browse our guides to the best free AI art generators and everything we know about OpenAI's ChatGPT.

Daniel Prince, Professor of Information Security, Lancaster University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
