My research examines the dynamics of climate-related facts and beliefs, focusing on how misinformation originates amid evolving scientific evidence and shifting public perceptions. I investigate the interactions, relationships, and mechanisms through which online climate misinformation emerges.
One approach I use is the detailed case study. In the case of Chemtrails, the visible trails left by aircraft that are often misconstrued as evidence of geoengineering, I analyze how misinformation first arises. My study highlights the critical role of situational context in intertwining climate beliefs, physical and digital encounters with cloud formations, and the affordances of digital technology in generating online misinformation. I show how the repetition and replication of misinformation are significantly shaped by prior physical phenomena and by the contextual backdrop of visual content.
In this stream, I (i) explore why intelligent algorithms treat protected classes systematically and unfairly, proposing strategies for technology design and management to mitigate such biases and their impacts, and (ii) address the cybersecurity, privacy, and IT failure risks associated with Artificial Intelligence (AI), which threaten its value and performance in organizational settings. Because current research concentrates largely on technical mitigations, my work explores organizational strategies to counter these risks.
I investigate governance dynamics within platform ecosystems, adopting a comprehensive view to examine how external environmental factors influence platform and actor behaviors in reinforcing platform boundaries. Platform governance is a well-established research area, but the influence of societal norms, regulations, and external competitive pressures on it has been underexplored. These factors can significantly affect platform policies, impacting issues like misinformation and bias on platforms. Using fuzzy-set qualitative comparative analysis (fsQCA), I develop models that elucidate the relationship between external conditions and platforms' boundary reinforcement strategies.
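To illustrate the kind of reasoning fsQCA involves, the minimal Python sketch below shows two standard building blocks of the method: direct calibration of a raw measure into fuzzy-set membership scores and a sufficiency-consistency check for a condition and an outcome. The variable names, thresholds, and data are hypothetical and are not drawn from my study; actual analyses are typically run with dedicated QCA software rather than hand-rolled code like this.

```python
import numpy as np

def calibrate(raw, full_non, crossover, full_mem):
    """Direct calibration of a raw measure into fuzzy-set membership scores,
    using a logistic form anchored at 0.05 / 0.50 / 0.95 for the three thresholds."""
    raw = np.asarray(raw, dtype=float)
    scores = np.empty_like(raw)
    above = raw >= crossover
    # Slopes chosen so membership reaches 0.95 at full_mem and 0.05 at full_non.
    up = np.log(19) / (full_mem - crossover)
    down = np.log(19) / (crossover - full_non)
    scores[above] = 1 / (1 + np.exp(-up * (raw[above] - crossover)))
    scores[~above] = 1 / (1 + np.exp(-down * (raw[~above] - crossover)))
    return scores

def consistency_sufficiency(condition, outcome):
    """Consistency of 'condition is sufficient for outcome': sum(min(x, y)) / sum(x)."""
    return np.minimum(condition, outcome).sum() / condition.sum()

# Hypothetical illustration: regulatory pressure (raw 0-10 index) as a candidate
# condition for boundary reinforcement (already calibrated to 0-1 membership).
regulatory_pressure = calibrate([2.0, 4.5, 6.0, 8.5, 9.2],
                                full_non=2.0, crossover=5.0, full_mem=9.0)
boundary_reinforcement = np.array([0.10, 0.40, 0.60, 0.90, 0.95])

print(consistency_sufficiency(regulatory_pressure, boundary_reinforcement))
```

A consistency score close to 1 would suggest that high membership in the condition rarely exceeds membership in the outcome, i.e., the condition behaves as (part of) a sufficient configuration; in practice this check is applied to combinations of conditions via a truth table, not to a single condition in isolation.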
Participated in workshops and roundtable talks discussing my experiences as a PhD student.
Presented my work on boundary reinforcement in platform governance to an audience focused on QCA methodology.