The Faculty of Arts and Social Sciences (FASS) and the Department of Information Systems and Analytics (DISA) at the National University of Singapore held the inaugural Global Research Forum: The Fruition and Challenges of Computational Social Science from 23 to 25 August 2021. The forum explored the field of computational social science, which is the integrated, interdisciplinary pursuit of social science inquiry with emphasis on data or information processing and through the methods of advanced computation. The event sought to promote interdisciplinary research collaborations in computational social science among academia, industry, and policymakers in Singapore, Asia, and globally.
Professor Alex (Sandy) Pentland (Massachusetts Institute of Technology) delivered the forum’s first keynote, ‘Computational Social Science: A Field Whose Time Has Come’. He spoke about the importance, opportunities, and challenges of extending traditional social science to include computational social science. Prof Pentland discussed, among other aspects of this new field, how the availability of vast amounts of digital data has made it possible to obtain accurate predictions about human behavior that can be put to beneficial use in policymaking and other areas to improve society, and also how this availability can be abused to cause harm, such as violating privacy and enabling cyberattacks.
Prof Pentland discussed how we should adapt to innovations in big data and AI. He pointed out that people are very concerned about these issues and that there is also a lot of hype around them. One of the problems with AI is that most people do not know what exactly it does. Prof Pentland advised that we treat these technologies the same way we treat other technologies in our daily lives, such as automotive, electrical, or medical technology. If something goes wrong with medical technology, the effects are usually evident; the patient may die, for example, and the makers of the technology can be held accountable. He explained why we need to bring similar accountability to AI systems and how he promotes record keeping among those who work on such systems, particularly records of a system’s effects on its targeted stakeholders. Such records would enable quicker, potentially immediate, detection of bias or unfairness perpetuated by AI systems.
Prof Pentland also mentioned that innovation depends on access to different sorts of information, opportunities, and skills, but policy relating to innovation is primarily economic and does not serve to improve it. A purely economic policy focus misses the dynamism in society that contributes to growth in income and culture. He emphasized the importance of building trusted reciprocal relationships in order to change behaviors and improve performance. Prof Pentland added that we need to shift away from a society governed by outdated prejudices and categories, become savvier about measuring things, and use those measurements to better govern ourselves.
Elaborating on his views on fabricated data, open science, and researchers’ accountability, Prof Pentland noted that when dealing with very large data sets generated by a company, one can release aggregated data publicly, but not anonymized individual-level data, because doing so risks violating the privacy of individuals. He explained that this makes it harder to check the correctness of the data. Prof Pentland added that he believes in open science, and that his strategy for reducing the risk of fabricated data is to have multiple people in his lab work on different projects with the same data. He believes that governments ought to require much of this aggregated data to be made public, similar to how census data is published, which would help neighborhoods better govern themselves and governments make better decisions.
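The distinction Prof Pentland draws between aggregated and anonymized data can be illustrated with a minimal sketch. The idea, in one common approach, is to publish only group-level statistics and to suppress any group small enough that its statistics could identify individuals. The data, the `MIN_GROUP_SIZE` threshold, and all names below are hypothetical illustrations, not from the talk itself.

```python
from collections import defaultdict

# Hypothetical individual-level records: (neighborhood, commute_minutes).
records = [
    ("Riverside", 25), ("Riverside", 40), ("Riverside", 30),
    ("Hilltop", 55), ("Hilltop", 50), ("Hilltop", 45),
    ("Old Mill", 35),  # a single record: releasing it would expose one person
]

MIN_GROUP_SIZE = 3  # illustrative suppression threshold


def aggregate(rows, k=MIN_GROUP_SIZE):
    """Release only per-group counts and means for groups of size >= k."""
    groups = defaultdict(list)
    for name, minutes in rows:
        groups[name].append(minutes)
    released = {}
    for name, values in groups.items():
        if len(values) >= k:  # small groups are withheld entirely
            released[name] = {
                "count": len(values),
                "mean_commute": sum(values) / len(values),
            }
    return released


print(aggregate(records))
```

In this sketch, the lone ‘Old Mill’ record is suppressed while the larger groups are published as counts and means, capturing the trade-off Prof Pentland describes: the public release is useful for governance while individual records stay private, though it also becomes harder for outsiders to audit the underlying data.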
Professor Cuihua (Cindy) Shen (University of California, Davis) delivered the second keynote, ‘Combating Multimodal Misinformation in Online Networks’. She showed how misinformation research can be divided into six categories: identification, dissemination/engagement, message, modality (text, image, audio, video), user susceptibility, and impact. Prof Shen explained that multimodal misinformation is of key importance because people are increasingly consuming information in multimodal formats. She added that when we study multimodal misinformation we must be aware that we process and perceive visual misinformation quite differently than textual information, since visual data is easier to remember and share, and is more persuasive. Moreover, people lack the visual ability and technical tools to tell real and fake images apart. Prof Shen then shared some studies her lab has done on people’s susceptibility to multimodal misinformation.
Prof Shen discussed proof checking of videos for mass consumer use. She explained that technical solutions are difficult because most videos have been edited in some way, so it is hard for a technical tool to determine whether, and to what extent, a video is ‘fake’. This is also true, to a lesser extent, of images. In addition, the high-quality proof checking tools that do exist are currently unavailable for regular consumer use. Prof Shen advised that ordinary consumers improve their digital media literacy to guard against being fooled by media that misinforms, and advocate for media organizations and platforms to take on more responsibility for rooting out misinformation. She elaborated that policymakers and educational institutions need to invest more in boosting and advocating for increased media literacy to fight ‘fake news’, cautioning that we cannot stop at boosting the public’s media literacy and must take a multi-pronged approach to ensure that media producers and platforms are not let off the hook.
Stay tuned for more from the Global Research Forum!