Research in Progress: AI transparency, responsible and sustainable social media

By Zhixin (Giselle) Pu and Mike Schmierbach, Penn State University

Social media have become common sources of daily news, and the question of how platforms can sustain their credibility and utility has come into focus. Previous research suggests that transparency about algorithms is one way to improve social media sustainability: it clarifies platforms' responsibility for content and helps them provide a reasonable curation service to users.

We know, thanks to Eslami et al., that transparency about algorithms changes users' attitudes toward them, shaping whether users continue to use a platform or leave it.

However, social media algorithms largely remain a black box: the specific criteria and processes behind them are unknown to users. That opacity makes responsible and sustainable social media difficult to achieve. Platforms are expected to act as new stakeholders in content and to take responsibility for it, yet users cannot tell why certain news appears in their feeds rather than other news. Users may even want to stop using social media because the content recommendation rules are so opaque.

By being transparent about their algorithms, social media can convey how content recommendations are made and why users see the content recommended to them. In turn, users could adjust their news feeds to avoid or select certain information, or make better judgments about how to regulate their information flow. The conundrum is how to convey this information so that the algorithms are as understandable as possible and users' concerns and information needs are appropriately addressed.

Studies have provided a framework of transparency instruments (van Drunen et al.) and found that when algorithms are transparent (Kim & Moon), people perceive them as more trustworthy and useful (Shin). However, research on the rules of algorithms is still needed, especially on how transparency is essential to ensuring a satisfying user experience.

Our project asks how social media can be transparent about their algorithms and how that transparency fosters the sustainability of social media. It examines how platforms can fulfill their social responsibility by looking at the transparency of algorithm rules and at user behaviors. We will also investigate whether awareness of the algorithm contributes to more deliberate consumption of heterogeneous content and a more positive user experience.

With our research, we hope to illustrate the theoretical value of transparent, responsible and sustainable social media. We also aim to offer practical advice to social media platforms on how to increase transparency and take on the responsibility of offering users a diverse news diet.

Our project furthers the mission of the Page Center by addressing one of its topics of special interest: the examination of corporate social responsibility communication. The practical findings on how user preferences shape users' news diets will be relevant to practitioners' social media work; practitioners could apply them to their social media campaigns to earn more attention and higher click-through rates. Social media users could also benefit from the project by learning to better understand how social media curation works.

This project is funded by the Page Center. For further information on this study, please email Giselle Pu at zhixinpu@psu.edu. Results will be available next year.
