PhD Studentship Opportunity
Examining the ethical implications of natural language processing
The Edinburgh Futures Institute’s Centre for Technomoral Futures and the School of Informatics are delighted to invite applications for this PhD studentship, funded by Baillie Gifford, to begin in the academic year 2025/2026.
This studentship, which is open to UK, EU and international applicants, will support rigorous interdisciplinary PhD research into the ethical challenges posed by the growing use of natural language processing and artificial intelligence.
Application Deadline: 15 March 2025
Supervisors:
Dr Zee Talat, School of Informatics
Secondary supervisor, to be confirmed
The Project:
The aim of this project is to conduct research in natural language processing (NLP) that identifies and explores methods for the ethical development of NLP tools. Given the growing use and proliferation of NLP technologies, questions surrounding their ethical and responsible development have become increasingly urgent. To this end, the project will examine the current ecosystems and practices for the development of language technologies, proactively develop methods for examining NLP tools and their social ramifications, or design NLP technologies that encourage ethical practices. Across these areas, the project seeks to identify the ethical challenges faced by NLP and approaches to addressing them. For example, the project could:
Investigate how NLP and machine learning contribute to inequities in society
Show how end-users of NLP technologies can benefit from personalisation and privacy preservation methods
Investigate why current methods for addressing social biases in NLP and machine learning models fall short, and identify remedies for such shortcomings
Examine NLP and machine learning literature for the values that underpin research in the field
The project will consider how NLP and machine learning currently fall short in engaging with ethical development practices (such as those in the indicative list above) and examine how such practices can be improved for the benefit of end-users and society more broadly.
The project will make explicit the shortcomings of current machine learning methods and of research on fairness and ethics in NLP and machine learning, and propose social, infrastructural, or technical mechanisms that can limit the harmful consequences of applying current NLP and machine learning methods in social contexts. It will contribute to the discourse on ethics and fairness in NLP and machine learning by considering in detail the promises made by AI-related fields, and how and why those promises go unmet.