Can AI Prevent Fake News?

When we think about ‘Artificial Intelligence’, we tend to assume it is rocket science, though it is not. I am waiting for the day when school books include an essay on ‘Artificial Intelligence’ instead of ‘Science in Everyday Life’.
Simply put, artificial intelligence is about machine intelligence. Day by day, AI technology is becoming more efficient and smarter. Machines are among the most wonderful creations of humans, and humans have made them operate much like themselves. But here is the good thing about a machine: if you slap a man, he will slap you back, but a machine won’t. Joking aside, a machine tries to work like a man, but with programmed intelligence. A man can commit fraud, but a machine cannot. A man can spread fake news; a machine will not, unless it is programmed to.
Fake news is so pernicious in part because it superficially resembles the real thing. AI tools promise to help identify it, and researchers have found that one of the best ways is for the AI to learn to create fake news itself: a double-edged sword, though perhaps not as dangerous as it sounds.

A fake story might, for example, claim that a very high percentage of crimes in a European country is committed by foreign immigrants. In theory, that would be an easy claim to disprove, given the large troves of open data available, yet journalists waste valuable time tracking that data down.
So Fandango’s tool links all kinds of European open data sources together, then bundles and visualizes them. Journalists can, for example, use pooled national data to address claims about crime, or apply data from the European Copernicus satellites to climate change debates.
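To give a concrete flavour of that workflow, here is a minimal sketch in Python of checking a numeric claim against pooled open data. The dataset, the figures, and the `check_claim` helper are invented for illustration; they are not Fandango’s actual sources or API.

```python
# Minimal sketch of pooling open data to check a numeric claim.
# The dataset names and figures below are illustrative placeholders,
# not real Fandango sources or real statistics.
import pandas as pd

# Pretend these rows were pulled from two national open-data portals.
crime_stats = pd.DataFrame({
    "country": ["A", "B"],
    "total_crimes": [100_000, 250_000],
    "crimes_by_foreign_nationals": [12_000, 20_000],
})

def check_claim(df: pd.DataFrame, country: str, claimed_share: float) -> str:
    """Compare a claimed percentage against the pooled open data."""
    row = df.loc[df["country"] == country].iloc[0]
    actual_share = row["crimes_by_foreign_nationals"] / row["total_crimes"]
    verdict = "supported" if abs(actual_share - claimed_share) < 0.05 else "contradicted"
    return f"Claimed {claimed_share:.0%}, data shows {actual_share:.0%}: {verdict}"

print(check_claim(crime_stats, "A", claimed_share=0.60))
```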

A new system called Grover, created by University of Washington and Allen Institute for AI (AI2) computer scientists, is extremely adept at writing convincing fake news on myriad topics and in as many styles, and as a direct consequence it is also no slouch at spotting it.
It is not the first such generator: OpenAI made a splash recently by announcing that its text-generating AI was too risky to publish. But Grover’s creators believe they will only get better at fighting generated fake news by putting the tools to create it out there to be studied.
The AI was created by having it ingest an enormous corpus of real news articles, a dataset called RealNews that is being introduced alongside Grover. The 120-gigabyte library contains articles from the top 5,000 publications tracked by Google News.
By studying the style and content of millions of real news articles, Grover builds a complex model of how certain phrases or styles are used, what topics and features follow one another in an article, how they are associated with different outlets and ideas, and so on. The AI can also detect whether a piece of writing has been copied or published under another name.
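As a rough illustration of the detection side of this idea, the toy sketch below trains a simple classifier to separate real articles from machine-generated ones. Grover itself is a large neural language model trained on RealNews; the texts, the TF-IDF features, and the tiny training set here are placeholders meant only to show the real-versus-generated classification step.

```python
# Toy illustration of "learn to generate, then learn to detect":
# train a classifier on real articles versus machine-generated ones.
# All texts here are placeholders, not Grover output or RealNews data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

real_articles = [
    "The city council approved the budget after a public hearing on Tuesday.",
    "Researchers published a peer-reviewed study on air quality trends.",
]
generated_articles = [  # imagine these came from a neural text generator
    "Officials confirm moon cheese imports will solve the housing crisis.",
    "Sources say the election was decided by a coin flip in a secret bunker.",
]

texts = real_articles + generated_articles
labels = [0] * len(real_articles) + [1] * len(generated_articles)  # 1 = generated

detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)

print(detector.predict(["A secret bunker coin flip decided everything, sources say."]))
```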
Such an arrangement, known as an adversarial setup, is a powerful force in AI research right now, often being used to create photo-realistic imagery from scratch.
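The sketch below shows the bare bones of an adversarial setup on a one-dimensional toy problem: a generator learns to mimic a simple data distribution while a discriminator learns to tell real samples from generated ones. It assumes PyTorch, and the network sizes and data are made up; the systems that produce photo-realistic images are vastly larger, but the training loop has the same shape.

```python
# Minimal sketch of an adversarial setup: generator vs. discriminator
# on 1-D toy data. Purely illustrative, not an image GAN.
import torch
import torch.nn as nn

torch.manual_seed(0)

gen = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
disc = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(500):
    real = torch.randn(32, 1) * 0.5 + 3.0   # "real" data drawn from N(3, 0.5)
    fake = gen(torch.randn(32, 4))          # generator's attempt at faking it

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = bce(disc(real), torch.ones(32, 1)) + \
             bce(disc(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator call fakes real.
    g_loss = bce(disc(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print("mean of generated samples:", gen(torch.randn(1000, 4)).mean().item())
```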

Researchers at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) and QCRI developed a machine learning system that examines a range of sources, from major outlets like CNN or Fox to low-traffic content suppliers, and rates them on factors such as language, sentence structure, complexity, and emphasis on features like fairness or loyalty.
The project used data from Media Bias/Fact Check (MBFC), whose human fact-checkers rate the accuracy of about 2,000 large and small sites, and it has so far built an open-source database of more than 1,000 sources with ratings for accuracy and bias.
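A very simplified sketch of that approach might look like the following: compute a few stylistic features per outlet and fit a classifier against human factuality labels. The features, outlets, and labels below are toy placeholders, not the MBFC data or the researchers’ actual feature set.

```python
# Simplified sketch of source-level factuality rating: a few linguistic
# features per outlet, fit against human labels. Toy data throughout.
import re
from sklearn.ensemble import RandomForestClassifier

def source_features(articles: list[str]) -> list[float]:
    """Crude stand-ins for language/structure/complexity features."""
    text = " ".join(articles)
    sentences = [s for s in re.split(r"[.!?]", text) if s.strip()]
    words = text.split()
    avg_sentence_len = len(words) / max(len(sentences), 1)
    exclamations = text.count("!") / max(len(sentences), 1)
    caps_ratio = sum(w.isupper() for w in words) / max(len(words), 1)
    return [avg_sentence_len, exclamations, caps_ratio]

outlets = {
    "outlet_a": ["The committee voted 7-2 to adopt the measure after debate."],
    "outlet_b": ["SHOCKING!!! You WON'T believe what THEY are hiding from you!"],
}
labels = {"outlet_a": 1, "outlet_b": 0}  # 1 = high factuality (toy labels)

X = [source_features(articles) for articles in outlets.values()]
y = [labels[name] for name in outlets]

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict([source_features(["BREAKING!!! The TRUTH exposed AT LAST!"])]))
```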
UK startup Fabula AI reckons it has devised a way for artificial intelligence to help user-generated content platforms get on top of the disinformation crisis that keeps rocking the world of social media with antisocial scandals. A quotation captures the broader point:
‘Fake news is not a mathematical question of algorithms and data, but a very philosophical question of how we deal with the truth.’ -Francesco Nucci, Engineering Group, Italy

And this is not just about detecting fake news; it is also a problem of trust and a lack of critical thinking. Researchers at Pakistan’s Information Technology University took another tack in their paper,
‘Using Blockchain to Rein in the New Post-Truth World and Check the Spread of Fake News’, in which they propose a blockchain-based framework for fake news prevention.
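To make the provenance idea concrete, here is a minimal sketch of a hash-chained ledger in which each published article is recorded and linked to the previous record, so later tampering is detectable. The `NewsChain` class and its methods are invented for illustration and are far simpler than the framework proposed in the paper.

```python
# Minimal sketch of content provenance on a hash chain: each published
# article is hashed and linked to the previous record, so tampering
# with an earlier entry breaks the chain. Toy in-memory version only.
import hashlib
import json
import time

class NewsChain:
    def __init__(self):
        self.blocks = [{"index": 0, "prev_hash": "0" * 64,
                        "publisher": "genesis", "article_hash": "", "ts": 0}]

    def _hash(self, block: dict) -> str:
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def publish(self, publisher: str, article_text: str) -> dict:
        block = {
            "index": len(self.blocks),
            "prev_hash": self._hash(self.blocks[-1]),
            "publisher": publisher,
            "article_hash": hashlib.sha256(article_text.encode()).hexdigest(),
            "ts": time.time(),
        }
        self.blocks.append(block)
        return block

    def verify(self) -> bool:
        """Every block must reference the hash of the block before it."""
        return all(self.blocks[i]["prev_hash"] == self._hash(self.blocks[i - 1])
                   for i in range(1, len(self.blocks)))

chain = NewsChain()
chain.publish("trusted_outlet", "Original article text.")
chain.publish("trusted_outlet", "A follow-up article.")
print("chain intact:", chain.verify())
chain.blocks[1]["publisher"] = "impostor"   # tamper with an earlier record
print("chain intact after tampering:", chain.verify())
```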

Nowadays, people are losing trust in traditional media and institutions, and that is not something that can be mitigated through technology alone.
It requires effort from all stakeholders, but we can still be hopeful that these projects will play a part in that larger effort.

  • (Published on Core 2.0 on the occasion of Machelaration at IUT)
