Interview with Sanjaikanth E Vadakkethil Somanathan Pillai: A Pioneer in Misinformation Detection and Ensemble Learning Techniques

Sanjaikanth E Vadakkethil Somanathan Pillai


In this exclusive interview, we speak with Sanjaikanth E Vadakkethil Somanathan Pillai, a Senior Member of IEEE and a Fellow of IET, globally renowned for his expertise in misinformation detection, secure artificial intelligence, and network security. With over 19 years of experience in the IT industry, Sanjaikanth currently holds the position of Staff Site Reliability Engineer at Visa Inc. while pursuing his PhD in Computer Science at the University of North Dakota.

His groundbreaking contributions, particularly in misinformation detection using ensemble learning techniques, have received international recognition, with his work being adopted by researchers worldwide. Sanjaikanth has published over 50 papers in esteemed journals and conferences and holds several patents.

In addition to his extensive professional and academic achievements, Sanjaikanth plays an active role in the academic community, serving on the editorial boards of prominent journals, such as the International Journal of Artificial Intelligence in Scientific Disciplines and the Information Resources Management Journal. He has also co-authored significant books like ‘ChatGPT: Transforming Industries Through AI-Powered Innovation,’ contributing to advancements in AI, IoT, and machine learning.

We are excited to hear more about his work and insights in this interview!

Interviewer: Thank you, Sanjaikanth, for taking the time to speak with us today. Could you start by telling us about your recent work on misinformation detection and how it contributes to the ongoing battle against fake news?

Sanjaikanth: Thank you for having me! My recent work focuses on developing a system that uses ensemble learning methods to detect misinformation, particularly fake news. With the overwhelming amount of information circulating on the internet and social media, it’s crucial to have reliable systems that can differentiate between true and false information. Our system combines ensemble learning methods such as bagging, boosting, and stacking with recurrent neural networks (RNNs), alongside sentiment and emotional analyses, to improve the accuracy of misinformation detection.
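
To make the idea concrete, here is a minimal sketch of a soft-voting ensemble over TF-IDF text features in scikit-learn. It is illustrative only and does not reproduce the published system’s architecture, RNN component, or data:

```python
# Illustrative sketch only: a soft-voting ensemble over TF-IDF text
# features. The published system's exact models (including the RNN
# branch and the emotional-analysis features) are not reproduced here.
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["Scientists confirm the study's findings in a peer-reviewed journal.",
         "SHOCKING miracle cure that the government is hiding from you!"]
labels = [1, 0]  # toy data: 1 = true news, 0 = fake news

model = make_pipeline(
    TfidfVectorizer(stop_words="english"),
    VotingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=1000)),
                    ("nb", MultinomialNB()),
                    ("gb", GradientBoostingClassifier())],
        voting="soft",  # average predicted probabilities across models
    ),
)
model.fit(texts, labels)
print(model.predict(["Experts dispute the viral claim circulating online."]))
```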

Interviewer: What motivated you to explore the inclusion of sentiment and emotional analysis in misinformation detection?

Sanjaikanth: The idea stemmed from understanding that misinformation often triggers emotional responses, such as anger or fear. By incorporating sentiment and emotional analysis, we can detect the underlying emotional tone of the content, which enhances our system’s ability to accurately classify misinformation. We found that adding these classifiers led to a significant improvement in the accuracy of our model, as misinformation tends to evoke strong emotions compared to neutral or truthful information. 
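
As an illustration of how emotional tone can be folded into a model’s inputs, the following sketch appends VADER sentiment scores to TF-IDF features; VADER is a stand-in here, not necessarily the analyzer used in the research:

```python
# Sketch: appending sentiment scores to text features. VADER is used
# here as a stand-in analyzer; it is not necessarily the sentiment or
# emotion model used in the published research.
import numpy as np
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer
from scipy.sparse import hstack
from sklearn.feature_extraction.text import TfidfVectorizer

nltk.download("vader_lexicon", quiet=True)

texts = ["Officials release a routine budget report.",
         "OUTRAGE! They are LYING to you about the cure!"]

sia = SentimentIntensityAnalyzer()
# Four emotional-tone features per document: negative, neutral,
# positive, and the overall compound score.
sentiment = np.array([[s["neg"], s["neu"], s["pos"], s["compound"]]
                      for s in (sia.polarity_scores(t) for t in texts)])

tfidf = TfidfVectorizer().fit_transform(texts)
features = hstack([tfidf, sentiment])  # combined matrix for a classifier
print(features.shape)
```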

Interviewer: How does the rise of deepfakes and AI-generated content complicate misinformation detection, and how can AI models adapt to these challenges?

Sanjaikanth: Deepfakes and AI-generated content represent a significant challenge in misinformation detection because they are becoming more sophisticated and harder to detect with the naked eye. Traditional models that analyze text for sentiment and emotion aren’t enough to combat these types of misinformation. To address this, AI models need to incorporate more advanced techniques, such as deep learning algorithms for image and video recognition, as well as multimodal analysis that examines both the textual and visual aspects of content. There is also a growing need for collaboration between AI researchers and policymakers to establish regulations for AI-generated content, which could help curb the spread of such sophisticated fake media.

Interviewer: With the widespread use of large language models like ChatGPT, do you see a risk of misinformation being generated through these tools, and how should that be handled?

Sanjaikanth: Yes, large language models like ChatGPT have the potential to generate misinformation, either unintentionally or deliberately if misused. It’s a double-edged sword because while these models can assist in a lot of positive ways, they also need mechanisms to ensure the information they produce is accurate and reliable. To address this, we need ongoing advancements in the ethical use of AI and more robust content filtering systems that can detect when a model generates misleading or incorrect information. Additionally, responsible AI deployment practices, where models are constantly monitored and updated with correct information, can help mitigate this risk.

Interviewer: Can you explain how your ensemble method differs from other traditional machine learning techniques in detecting misinformation?

Sanjaikanth: Ensemble learning is all about combining multiple models to improve performance. Instead of relying on a single algorithm, we use various methods like bagging, boosting, and stacking. Each of these models is trained on the same dataset but with different approaches, allowing them to complement each other. This leads to higher accuracy because each model corrects the weaknesses of the others. In our research, we found that ensemble learning methods, when combined with RNN and sentiment analysis, outperform standalone models.
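
For readers unfamiliar with stacking, a generic scikit-learn example (not the paper’s exact configuration) combines bagged and boosted base learners under a logistic-regression meta-learner:

```python
# Generic stacking illustration (not the paper's exact configuration):
# bagged trees and gradient boosting as base learners, with a logistic
# regression meta-learner trained on their out-of-fold predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import (BaggingClassifier, GradientBoostingClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("bagging", BaggingClassifier(DecisionTreeClassifier(),
                                      n_estimators=50)),
        ("boosting", GradientBoostingClassifier()),
    ],
    final_estimator=LogisticRegression(),  # learns to weight the base models
    cv=5,  # base models predict on held-out folds to train the meta-learner
)
stack.fit(X, y)
print(f"training accuracy: {stack.score(X, y):.3f}")
```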

Interviewer: What challenges did you face when developing this misinformation detection system?

Sanjaikanth: One of the main challenges was dealing with the diversity of misinformation. False information can come in many forms, from subtle inaccuracies to outright falsehoods. Another challenge was processing large datasets, as we worked with over 45,000 records of news articles. Preprocessing, such as data cleanup, stopword removal, and tokenization, was essential to make the dataset suitable for machine learning models. Also, tuning the ensemble learning techniques to balance the strengths of each model was a complex task.
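
Those preprocessing steps can be sketched in a few lines of Python; the stopword list below is a small illustrative subset:

```python
# Sketch of the preprocessing described above: cleanup, stopword
# removal, and tokenization. The stopword list is a tiny illustrative
# subset; real pipelines use fuller lists (e.g. NLTK's).
import re

STOPWORDS = {"the", "a", "an", "is", "it", "and", "or", "of", "to", "in"}

def preprocess(text: str) -> list[str]:
    text = text.lower()                    # normalize case
    text = re.sub(r"http\S+", " ", text)   # cleanup: drop URLs
    text = re.sub(r"[^a-z\s]", " ", text)  # cleanup: drop punctuation/digits
    tokens = text.split()                  # tokenization
    return [t for t in tokens if t not in STOPWORDS]  # stopword removal

print(preprocess("BREAKING: Miracle cure found, and doctors hate it! http://x.co"))
# -> ['breaking', 'miracle', 'cure', 'found', 'doctors', 'hate']
```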

Interviewer: You mentioned earlier that you used a dataset of 45,000 news records. How did you ensure that the data was representative of real-world scenarios?

Sanjaikanth: We collected the dataset from various sources on the internet, and it consisted of an equal proportion of true and fake news. The key was to capture a wide range of topics, from politics to health, which allowed the system to generalize better. One limitation was that the dataset did not include the most recent news, but we plan to address this in future iterations by constantly updating the dataset to reflect current events.

Interviewer: How does your system handle rapidly evolving misinformation, such as during a pandemic or political crisis?

Sanjaikanth: Misinformation tends to spread faster during crises like pandemics, and it evolves quickly. Our system is designed to be adaptive, meaning it can learn from new data as it becomes available. We plan to implement real-time detection in the future, where the system continuously trains on incoming data, such as social media posts, to keep up with evolving narratives.
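
One common way to realize that kind of adaptivity is incremental (online) learning. The sketch below assumes a hypothetical labeled feed and uses a stateless hashing vectorizer with scikit-learn’s partial_fit, so the model updates batch by batch instead of retraining from scratch:

```python
# Sketch of incremental (online) learning for evolving misinformation.
# A stateless hashing vectorizer plus partial_fit lets the model update
# on each new labeled batch without retraining from scratch. The data
# feed below is a hypothetical stand-in for a real-time stream.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**18, alternate_sign=False)
clf = SGDClassifier(loss="log_loss")  # logistic regression trained online

def stream_of_batches():
    # Hypothetical source of newly labeled posts arriving over time.
    yield (["Verified report on updated health guidance.",
            "They don't want you to know this one trick!"], [1, 0])

classes = [0, 1]  # all classes must be declared on the first partial_fit call
for texts, labels in stream_of_batches():
    clf.partial_fit(vectorizer.transform(texts), labels, classes=classes)
```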

Interviewer: What are the real-world applications of your misinformation detection system?

Sanjaikanth: The system can be used by media outlets, social media platforms, and government agencies to identify and flag fake news in real time. It could also help in health sectors by identifying misinformation related to public health, such as false remedies during a pandemic. Furthermore, businesses could use it to monitor information about their brand and counter false rumors that could harm their reputation.

Interviewer: Misinformation spreads quickly on social media. How does your model differentiate between user-generated content and professional news sources?

Sanjaikanth: That’s a great question. In our current system, we focus on analyzing the content of the text itself, regardless of the source. However, we are looking into ways to weigh the credibility of the source as well. Professional news outlets often follow ethical guidelines, while user-generated content can be more prone to misinformation. Assigning weight to the authenticity of the publisher is one of the features we plan to implement in the future to improve the system’s accuracy.
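
A hypothetical sketch of such publisher weighting, with an invented credibility table and blending weight, might look like this; it illustrates the planned idea rather than an implemented feature:

```python
# Hypothetical sketch of source-credibility weighting. The credibility
# table, the blending weight, and the function itself are invented for
# illustration; this feature is described above as planned, not built.
CREDIBILITY = {
    "established_outlet": 0.9,   # assumed prior for professional newsrooms
    "anonymous_account": 0.3,    # assumed prior for unvetted user content
}

def combined_score(p_true_content: float, source: str,
                   content_weight: float = 0.8) -> float:
    """Blend the model's probability that content is true with a
    credibility prior for its publisher."""
    prior = CREDIBILITY.get(source, 0.5)  # unknown sources get a neutral prior
    return content_weight * p_true_content + (1 - content_weight) * prior

print(combined_score(0.7, "anonymous_account"))  # 0.8*0.7 + 0.2*0.3 = 0.62
```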

Interviewer: Where do you see the future of misinformation detection heading? Will AI play an even larger role in combating fake news?

Sanjaikanth: Absolutely! AI and machine learning will continue to play a pivotal role in detecting misinformation. The challenge is that fake news is getting more sophisticated, so we need to stay ahead by developing models that can adapt and learn in real-time. The future lies in creating systems that are not only accurate but also transparent and explainable, so users understand why certain information has been flagged as false. I also foresee more collaborations between AI developers, social media companies, and governments to create robust frameworks for misinformation detection.

Interviewer: Finally, what advice would you give to young researchers and professionals looking to enter the field of misinformation detection?

Sanjaikanth: My advice would be to stay curious and committed. This is a rapidly evolving field, and there’s always something new to learn. Focus on understanding the basics of machine learning and data processing, but also pay attention to the ethical implications of your work. Misinformation is not just a technical problem; it has real-world consequences. Developing systems that not only detect misinformation but also respect user privacy and rights is key. Keep experimenting with new models and techniques, and don’t be afraid to explore interdisciplinary approaches.

LinkedIn: https://www.linkedin.com/in/sanjai-kanth-012a6922
