Research

My doctoral research, conducted at the Université Polytechnique Hauts-de-France (DeVisu Laboratory, LARSH), examines how algorithmic systems reconfigure organisational communication, knowledge production, and the distribution of power within digital environments. It is anchored in Information and Communication Sciences (ICS), a discipline that treats technologies not as neutral instruments but as socio-technical configurations embedded in social logics, power relations, and organisational rationalities.

Core Research Question

The central question driving this research is: how does AI transform the structures through which tacit knowledge is held and circulated in organisations?

Artificial intelligence is often framed as a managerial optimisation tool. My work reframes this: AI is not merely a tool for managing knowledge – it is an epistemological device that restructures what counts as knowledge, who holds it, and how it circulates. When organisations deploy algorithmic systems, they do not simply automate tasks. They redistribute cognitive capital, reconfigure communicational authority, and alter the conditions under which collective meaning-making becomes possible.

Theoretical Framework

The research mobilises a multi-layered theoretical architecture that reflects the interdisciplinary character of ICS.

Power, Knowledge and Algorithmic Mediation

Drawing on Foucault’s power/knowledge dynamics, the research analyses how algorithmic systems function as dispositifs that produce new regimes of truth within organisations. AI does not simply process existing knowledge – it generates new categories of the knowable and the unknowable, reshaping what organisations recognise as legitimate expertise.

Castells’ network theory provides a complementary lens for understanding how algorithmic mediation reconfigures the topology of information flows, creating new centres of power while marginalising others.

Tacit Knowledge and Organisational Epistemology

Nonaka and Takeuchi’s framework on tacit and explicit knowledge is central to the analysis. A key argument of this research is that AI produces impoverished formalisations of tacit knowledge – reductions that become organisational norms. Workers experience what I describe as a double dispossession: the loss of their knowledge monopoly, and the simultaneous delegitimisation of the tacit knowledge that remains.

Ubuntu Ethics as an Analytical Framework

Ubuntu – the African philosophical principle that a person is a person through other persons – is not a cultural add-on in this research. It functions as a rigorous post-colonial analytical framework that differentiates this work from mainstream AI governance discourse. Adapted from Mbigi’s vision, Ubuntu is operationalised through four interdependent dimensions: Survival, Solidarity, Compassion, and Respect & Dignity. These dimensions serve as evaluative criteria for assessing the relational and ethical qualities of AI-mediated communication.

My recent research has demonstrated that Ubuntu reveals what standard AI evaluation metrics (BLEU, ROUGE, perplexity) cannot capture: the structural marginalisation of collective interdependence – particularly Solidarity – in algorithmically mediated discourse.
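To make concrete what such surface metrics do measure, here is a minimal sketch of ROUGE-1 recall (the example sentences are invented for illustration). The metric counts unigram overlap with a reference text, so a response that paraphrases a solidarity-oriented idea without sharing its vocabulary scores zero – precisely the blind spot the Ubuntu dimensions are meant to address.

```python
# Illustrative only: a minimal ROUGE-1 recall computation. It scores
# lexical overlap with a reference and says nothing about relational
# qualities such as solidarity.
from collections import Counter

def rouge1_recall(reference: str, candidate: str) -> float:
    """Fraction of reference unigrams that also appear in the candidate."""
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    overlap = sum(min(n, cand[w]) for w, n in ref.items())
    return overlap / sum(ref.values())

reference = "we succeed together through mutual support"
candidate_a = "we succeed together through individual effort"    # shared words
candidate_b = "our achievements rest on caring for one another"  # paraphrase

print(rouge1_recall(reference, candidate_a))  # rewarded for lexical overlap
print(rouge1_recall(reference, candidate_b))  # scores 0.0 despite its meaning
```

Candidate B is arguably closer in spirit to the reference, yet the metric ranks it below Candidate A.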

Original Frameworks

This research has developed two original methodological tools for assessing GenAI-assisted communication.

STARA (Situation, Task, Action, Result, Self-Assessment) is a structured prompt model designed to generate comparable simulated interview outputs across different LLMs. It serves as generative scaffolding for road-testing interview guides and eliciting meta-level self-assessment from AI systems.
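As a hypothetical sketch of how a STARA prompt could be assembled for cross-model comparison: the five field names follow the acronym, but the wording, the dataclass structure, and the example values below are my own assumptions, not the published model.

```python
# Hypothetical illustration of a STARA-style prompt template.
# Field names mirror the acronym; everything else is assumed.
from dataclasses import dataclass

@dataclass
class StaraPrompt:
    situation: str        # organisational context of the simulated interview
    task: str             # what the simulated interviewee must address
    action: str           # behaviour requested from the model
    result: str           # expected form of the output
    self_assessment: str  # meta-level reflection requested from the model

    def render(self) -> str:
        """Serialise the five fields into one comparable prompt string."""
        return "\n".join([
            f"Situation: {self.situation}",
            f"Task: {self.task}",
            f"Action: {self.action}",
            f"Result: {self.result}",
            f"Self-Assessment: {self.self_assessment}",
        ])

prompt = StaraPrompt(
    situation="A mid-sized firm has just deployed an AI knowledge base.",
    task="Answer as a senior technician interviewed about tacit know-how.",
    action="Respond to each interview question in the first person.",
    result="Produce interview-style answers of two to three sentences.",
    self_assessment="Afterwards, assess the limits of your own answers.",
)
print(prompt.render())
```

Because the same rendered string can be submitted to different LLMs, the outputs remain structurally comparable – the property STARA is designed to provide.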

STRING is a post-interaction analytical framework that combines qualitative interpretation with quantitative scoring to evaluate three dimensions of response quality: coherence, plausibility, and relevance. It integrates conversational analysis with an Ubuntu-informed ethical lens and visual mapping of tensions between values and organisational dimensions.
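The quantitative side of such a framework can be sketched minimally as follows, under two stated assumptions that the text does not specify: a 1–5 rating per dimension and an unweighted mean as the aggregate.

```python
# A minimal sketch of quantitative scoring over STRING's three
# dimensions. The 1-5 scale and the unweighted mean are assumptions,
# not part of the framework as described.
DIMENSIONS = ("coherence", "plausibility", "relevance")

def string_score(ratings: dict[str, int]) -> float:
    """Aggregate per-dimension ratings (1-5) into a single mean score."""
    for dim in DIMENSIONS:
        if not 1 <= ratings[dim] <= 5:
            raise ValueError(f"{dim} must be rated between 1 and 5")
    return sum(ratings[d] for d in DIMENSIONS) / len(DIMENSIONS)

ratings = {"coherence": 4, "plausibility": 3, "relevance": 5}
print(string_score(ratings))  # 4.0
```

In practice such a score would sit alongside, not replace, the qualitative interpretation and Ubuntu-informed reading that the framework foregrounds.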

Together, STARA and STRING offer ICS researchers a structured approach to assessing GenAI-assisted interviews – one that complements technical metrics with the kind of ethical reflexivity those metrics cannot capture.

Current Research Directions

Communicational regimes of LLMs. A comparative analysis of ChatGPT, Gemini, and Claude has revealed that each model produces a distinct communicational regime: ChatGPT stabilises meaning through logical closure, Gemini structures it through systemic abstraction, and Claude opens it through reflexivity and ethical ambivalence. These are not merely stylistic differences – they reflect differentiated socio-technical mediation architectures.

Worker resistance and counter-conducts. An emerging direction examines how workers respond to AI-driven knowledge extraction: through knowledge retention, deliberate misinformation to AI systems, and the creation of new tacit knowledge domains post-implementation. These strategies are analysed as Foucauldian counter-conducts within algorithmic organisations.

Ubuntu as organisational design principle. Beyond its analytical function, Ubuntu is being explored as a framework for navigating organisational tensions between technological rationalisation and relational ethics – what I call Ubuntu by Design.

Institutional Affiliation

Laboratory: DeVisu – Laboratoire en Design Visuel et Urbain (LARSH)

Institution: Université Polytechnique Hauts-de-France (UPHF)

Discipline: Information and Communication Sciences (ICS)

AI does not just automate knowledge. It reconfigures who knows, what counts as knowing, and whose knowledge matters.