HRC Campus Media Fellow Published In Israel Hayom: Can AI Help Hasbara?

On May 8, 2023, one of our Campus Media Fellows, Yair Shpiler, was published in Israel Hayom, arguing that while some people are using AI for trivial purposes, such as generating essays and creating fake songs, the power of this technology should be harnessed to tackle significant global issues such as discrimination and hate.


Can AI Help Hasbara?

By: Yair Shpiler

Artificial Intelligence (AI) has taken the world by storm since the public release of ChatGPT in late 2022. It’s no longer possible to have a conversation without someone mentioning AI, and everyone seems to have an opinion on its potential benefits and dangers. Some people are excited about the recent developments, while others are terrified of the risks it poses, like misinformation, job loss, or even the destruction of humanity, as demonstrated by the infamous ChaosGPT.

Despite the worries, AI has enormous potential to be a force for good in fields like healthcare, education, and security. However, one crucial issue seems to have been overlooked in this technological revolution – the fight against antisemitism and the promotion of Israel advocacy.

While some people are using AI for trivial purposes, such as generating pick-up lines or creating fake songs, the power of this technology should be harnessed to tackle significant global issues such as discrimination and hate.

In today’s digital age, the internet is a vast and complex space, making it challenging to identify where the advocacy for truth is most needed. However, AI and deep learning software can help direct the efforts of those with the right intentions to the right place. With the help of AI, we can analyze vast amounts of data and generate reports on the platforms, groups, and threads where disinformation and antisemitism are most prevalent.

For example, a company like Cyabra is using AI to identify fake news and disinformation campaigns in the context of the Israeli-Palestinian conflict. Their platform uses natural language processing and machine learning to analyze social media activity and detect instances of false information or manipulated media. By categorizing these instances and predicting their potential impact, the company can help direct the efforts of advocacy organizations toward countering false narratives and promoting truth.

Similarly, CyberWell, a tech-for-good startup out of Israel, uses AI to identify antisemitic content and compile the world’s first live database of online antisemitism. Utilizing the power of big data, the platform flags posts from six different social media platforms and categorizes them by type of antisemitism. Human experts then vet the content to ensure it is antisemitic under the IHRA definition. The purpose of the database is to facilitate significant investments in content moderation by social media platforms and the gaming industry.
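To make the flag-categorize-vet workflow described above concrete, here is a minimal sketch in Python. Everything in it – the category names, keyword lists, and function names – is an illustrative invention standing in for a real machine-learning classifier; it is not CyberWell’s actual system.

```python
# Hypothetical sketch of a flag-and-categorize pipeline: automated
# flagging, category labels, then a human-review queue. The keyword
# lists below are toy stand-ins for a trained classifier.
from dataclasses import dataclass, field

# Invented category keywords for illustration only.
CATEGORIES = {
    "conspiracy": ["control the media", "globalist"],
    "holocaust_denial": ["hoax", "never happened"],
}

@dataclass
class FlaggedPost:
    platform: str
    text: str
    categories: list = field(default_factory=list)
    human_verified: bool = False  # set only after expert review

def flag_post(platform: str, text: str):
    """Return a FlaggedPost if any category matches, else None."""
    lowered = text.lower()
    hits = [cat for cat, keywords in CATEGORIES.items()
            if any(kw in lowered for kw in keywords)]
    return FlaggedPost(platform, text, hits) if hits else None

# Flagged posts enter a queue for human vetting against the IHRA
# working definition before they are added to the database.
review_queue = [p for p in (
    flag_post("ExampleNet", "They control the media, wake up"),
    flag_post("ExampleNet", "Great recipe for shakshuka!"),
) if p is not None]
```

The key design point mirrored from the article is that automation only triages: nothing is labeled antisemitic in the database until a human expert flips the verification flag.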

On a recent trip to Israel, I visited the Tel Aviv office of CyberWell and met with its Founder and Executive Director, Tal-Or Cohen Montemayor. Tal-Or believes that AI is a natural tool and that in order to ensure that ethics and social consequences are accounted for in the development of new AI, there must be a requirement for active partnership with tech-for-good non-profit initiatives. She believes that “the current antisemitism crisis in part reflects the fact that social platforms are not informed enough about antisemitic symbols, terminology, and narratives. They may have the technological capability to capture antisemitic content, but they haven’t directed appropriate resources towards understanding it and following up with enough automated tech to help identify it on the platform in a comprehensive way.”

It is important to understand that the fight against online antisemitism is also the fight against “real-world” forms of antisemitism. Israelis, the Jewish people, and our allies are the primary target for digital antisemitism; spikes in online hate frequently mirror and even spur periods of violence in Israel, where most antisemitic terror attacks occur. Because of that, Tal-Or believes that we must continue to use data and tech to fight online antisemitism – especially advanced open-source intelligence, discourse analysis, and compliance methodology.

After we identify where our efforts are best spent, AI can help us generate content. Generative AI tools like ChatGPT can assist individuals in crafting a response to an inaccurate or hateful post with a comment that disputes the false claims and offers alternative points of view. AI can pick up on nuances that an average human being might not, such as the best tone and style of language to use for a given audience (e.g., the followers of a page), as well as the optimal length of the response.
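As a rough illustration of tailoring tone and length to an audience, here is a small Python sketch. The audience profiles and wording are invented for the example, and no specific model API is assumed; a real tool would send the resulting prompt to a generative model like ChatGPT.

```python
# Illustrative sketch: build a prompt for a generative model that
# tailors tone and response length to the audience. The audience
# categories below are hypothetical examples.
def build_rebuttal_prompt(post_text: str, audience: str) -> str:
    # Toy mapping from audience to tone and length guidance.
    styles = {
        "academic": ("a measured, well-sourced tone", "3-4 paragraphs"),
        "casual": ("a friendly, conversational tone", "2-3 sentences"),
    }
    tone, length = styles.get(audience, ("a neutral tone", "one short paragraph"))
    return (
        f"Write a reply in {tone}, about {length} long, that respectfully "
        f"corrects the factual errors in this post:\n\n{post_text}"
    )

prompt = build_rebuttal_prompt("Example inaccurate claim here.", "casual")
```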

AI can also be used to create engaging multimedia content. For instance, companies like Lumen5 have developed AI-powered video creation tools that can take written content and automatically generate engaging videos. The popular graphic design tool Canva recently launched new AI tools, including a text-to-image generator (it’s super cool; try it for yourself). The platform also uses machine learning to suggest design elements, fonts, and colors that will work well together.

These technologies can be especially useful for non-profit organizations and individual advocates with varying degrees of influence who want to create compelling videos and graphics to educate and inspire their followers but may not have the resources to hire a professional.

The potential of AI to fight antisemitism and promote Israel advocacy is significant, and it is the duty of the Jewish people and our allies to treat the millennia-old sickness that is antisemitism with the most advanced tools at our disposal. AI can assist advocates like myself and my peers to detect online hate, categorize it, generate effective responses, and present them in a compelling manner. However, it is important to ensure that it is used ethically and effectively. This means developing AI tools that are accurate, reliable, and transparent; exploring the establishment of regulatory bodies to monitor the development and use of AI; and encouraging collaboration between AI developers and non-profit organizations. Only then can we harness the full potential of AI to combat online Jew-hatred.

Yair Shpiler is an HonestReporting Canada Campus Media Fellow.
