Bot vs. Bot: Texas professors to develop fake-news-fighting software

By Rajendra | Jul 25, 2017

Incensed by what he thought was a pedophilia ring headquartered in a Washington, D.C., pizza restaurant, a man opened fire inside Comet Ping Pong Pizza last year, sending employees and customers scrambling for cover.

The Dallas Morning News reports the shooting was real, but the sex ring, supposedly overseen by 2016 Democratic presidential candidate Hillary Clinton, was not. Instead, it was propaganda passed off as authentic through social media feeds and right-wing websites.

No one was hurt in the Dec. 4 shooting, and the suspect was sentenced in June to four years in prison.

Because of incidents like that one, a group of college instructors in North Texas believes combating fake news is a matter of national security. They're working on a proposal that would use technology to help root out false claims in the news.

"We decided to make national security the focus because of the potential interference in our election coming from Russia," said Chengkai Li, a University of Texas at Arlington associate professor in the Department of Computer Science and Engineering.

Mr. Li and four others (two professors from UTA and two from the University of Texas at Dallas) are collaborating on a project titled "Bot vs. Bot: Automated Detection of Fake News Bots," and they have a one-year grant of $30,000 in seed money from the University of Texas at Austin's Texas National Security Network Excellence Fund to get started.

"This is a seed grant that we hope will lead to a much larger grant that will identify these bots for social media users," Li said. "Right now, you don't know what is coming from a real person and what's coming from a computer, sometimes for malicious, or at least, misleading reasons."

Previously, Li and other colleagues partnered with Stanford and Duke universities to develop ClaimBuster, a fact-checking service built with a $241,778 grant from the National Science Foundation. ClaimBuster lets users type in a claim they've heard in the news and returns a score on a sliding scale of accuracy; the lower the number, the less accurate the report.

The site also has transcripts of all the 2016 presidential debates and heavy documentation of its methodology.
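To make that workflow concrete, here is a minimal Python sketch of a client querying a ClaimBuster-style scoring service. The endpoint URL, API-key header and response fields are assumptions for illustration only, not ClaimBuster's documented interface.

```python
import requests

# Hypothetical claim-scoring endpoint; the URL, header name and response
# schema are placeholders, not ClaimBuster's documented API.
SCORE_URL = "https://example-claim-scoring-host/api/score/text/"
API_KEY = "your-api-key-here"

def score_claim(sentence: str) -> float:
    """Send a sentence to the scoring service and return its score."""
    response = requests.get(
        SCORE_URL + requests.utils.quote(sentence),
        headers={"x-api-key": API_KEY},
        timeout=10,
    )
    response.raise_for_status()
    payload = response.json()
    # Assume the service returns a list of scored results.
    return payload["results"][0]["score"]

if __name__ == "__main__":
    print(score_claim("The unemployment rate fell to 4.3 percent in May."))
```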

Li and his computer science/engineering colleague Christoph Csallner will apply data mining techniques, coding analysis and other security measures to design an algorithm to spot fake news, with an assist from Mark Tremayne, an assistant professor of communication, and others who come from a journalism background.

UTD associate professor of computer science Zhiqiang Lin and Angela Lee, UTD assistant professor of emerging media and communication, are also part of the project.

The joint effort between the two universities will focus on false accounts spread via Twitter.
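The article does not spell out the eventual algorithm, but a common baseline for spotting automated Twitter accounts is a supervised classifier trained on account-level signals such as posting rate and follower ratios. The Python sketch below, using scikit-learn with entirely made-up features and labels, illustrates that general approach only; it is not the team's actual method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative account-level features (all hypothetical):
# [tweets_per_day, follower/following ratio, account_age_days, pct_tweets_with_links]
X_train = np.array([
    [450.0, 0.01,   30, 0.95],   # high-volume, link-heavy account, labeled bot
    [  3.5, 1.20, 2400, 0.10],   # ordinary human account
    [600.0, 0.02,   12, 0.99],
    [  1.2, 0.80, 3100, 0.05],
])
y_train = np.array([1, 0, 1, 0])  # 1 = bot, 0 = human

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score an unseen account: estimated probability that it is automated.
new_account = np.array([[300.0, 0.05, 45, 0.90]])
print(clf.predict_proba(new_account)[0][1])
```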

"We're not talking about the [Donald] Trump definition of fake news," Mr. Tremayne said. "Trump's definition of fake news is CNN, The Washington Post, The New York Times. We're talking about the pre-Trump definition stories that have been intentionally passed around with the intent to mislead."

The researchers in North Texas aren't the only ones seeking to identify purveyors of phony information. Melissa Zimdars, an assistant professor of communication at Merrimack College in Massachusetts, developed a checklist of fake news sites shortly after President Trump defeated Mrs. Clinton in the November election.

"I think the most troubling aspect of fake news and the proliferation of misleading information is that it further destabilizes the relationship between individuals and the press as well as between individuals of different political ideologies," she said.

Ms. Zimdars created her checklist for her students after she kept running across false sources cited in their papers. She also realized that even some of her professionally trained colleagues couldn't tell the difference between credible news sources and misleading ones.

She temporarily took down her checklist after she became the target of harassment, Zimdars said, but made it public again after the attacks against her eased.

The document remains publicly available, though Zimdars no longer updates it. It includes more than 1,000 sources that spread malicious or unreliable information, are satirical or rely on click-bait headlines to capture attention.

"There are plenty of actual things about which to disagree without having to consider alternative truths in the equation," Zimdars said. "How can we function as a society if we're not even sharing or at least understanding some of the same reality?"

Zimdars said readers can get a head start on spotting fake news sites by looking at domain names, such as the "8006" that appears at the end of an otherwise legitimate-looking fake New York Times site, or a "co" that comes after ".com" on sites that otherwise borrow the names of legitimate news outlets.
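Those cues translate naturally into a simple automated check. The sketch below is a hypothetical heuristic that assumes a small hand-maintained list of legitimate outlet domains and flags look-alike hosts, such as extra digits appended to a known newsroom's name or a ".com.co"-style suffix.

```python
from urllib.parse import urlparse

# Hypothetical, hand-maintained whitelist of legitimate outlet domains.
KNOWN_OUTLETS = {"nytimes.com", "washingtonpost.com", "cnn.com"}

def looks_suspicious(url: str) -> bool:
    """Flag hosts that imitate a known outlet with small alterations."""
    host = urlparse(url).netloc.lower()
    if host.startswith("www."):
        host = host[len("www."):]
    if host in KNOWN_OUTLETS:
        return False
    for outlet in KNOWN_OUTLETS:
        name = outlet.rsplit(".", 1)[0]  # e.g. "nytimes" from "nytimes.com"
        # e.g. "nytimes8006.com" or "nytimes.com.co" reuse a real outlet's name
        if name in host:
            return True
    return False

print(looks_suspicious("http://nytimes8006.com/story"))        # True
print(looks_suspicious("https://www.nytimes.com/section/us"))  # False
```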

Li and his partners aren't sure what shape their program will eventually take.

"One form can be a browser plug-in that can tell you something about the truthfulness of something, or it could be a third-party bot or an app or something," Li said.

If the yearlong period ends and the grant isn't renewed, Li said the team will continue to work on the project in classrooms and laboratories.

Research will really start to take shape when students return in the fall, Tremayne said. The group is considering organizing a "hack farm" as a way to attract students to the project.

"The idea is, can we come up with some code to identify fake news bots?" Tremayne said. "Even if it just means something like throwing ideas at the wall and seeing if anything sticks."

Source: csmonitor