A Comment on “Living Robots”

In mid-January 2020, headlines in the Independent and Wired reported that scientists had created the first “living robot”, a new class of biological artefact in the form of a “programmable organism”. Elsewhere, the Guardian highlighted the ability of these new entities to “walk”, Science Daily pointed to potential applications in “drug delivery” and “toxic waste clean-up”, and Forbes noted the new “opportunities and risks” that the technology invites.

The work itself bears the title “A scalable pipeline for designing reconfigurable organisms”; it should be noted, then, that the authors established ambitions of scalability and rapid manufacturing from the outset. However, by casting its concrete accomplishments alongside promises of automation and scalability in the design pipeline, the paper scrambles and refocusses the attention of commentators and the public onto its more speculative claims, foreclosing the opportunity for a measured and proportionate public discussion that might challenge the basic presuppositions of the research programme itself. Ethical considerations are thus effectively neutralised and relegated to an afterthought, while research programmes such as these proceed unhindered by scrutiny at the appropriate conceptual levels.

In this brief comment, I aim to draw attention to a few underlying conceptual and metaphysical themes by directly engaging the substantive claims of the authors, to answer questions such as: in what sense does this novel entity qualify as living?

The novelty of the work lies in the pairing of two otherwise distinct areas of research, namely machine learning and evolutionary computation on the one hand, and stem cell technology on the other, both of which have been the subject of significant ethical discussion in recent years. First, a machine learning technique based on an evolutionary algorithm produces novel designs for the biological artefacts; next, supercomputer simulations of the prospective designs in artificial worlds select for certain characteristics; finally, the selected designs are actualised in the physical world using the “stuff” of life itself, cells derived from a certain frog species by means of stem cell technology.
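The design stage just described follows the general shape of an evolutionary algorithm: generate candidate designs, score them in simulation, keep the best, and vary them. The following is a deliberately toy sketch of that loop, not the authors' actual pipeline; the grid size, fitness function and mutation rate are all stand-ins for illustration only.

```python
import random

random.seed(0)  # for reproducibility of this illustration

# Toy sketch of an evolutionary design loop (not the authors' actual
# pipeline): a candidate "design" is a bit-string marking which voxels
# of a small grid are passive (0) or contractile (1) cells, and the
# fitness function is a stand-in for the physics simulation that would
# score each design for a desired behaviour such as locomotion.

GRID = 16           # 4x4 layout of cell voxels
POP, GENS = 20, 30  # population size and number of generations

def random_design():
    return [random.randint(0, 1) for _ in range(GRID)]

def fitness(design):
    # Placeholder objective: reward contractile cells concentrated in
    # one half of the grid (a real pipeline would simulate movement).
    return sum(design[:GRID // 2]) - sum(design[GRID // 2:])

def mutate(design, rate=0.1):
    return [1 - g if random.random() < rate else g for g in design]

population = [random_design() for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP // 2]                   # selection
    offspring = [mutate(random.choice(survivors))
                 for _ in range(POP - len(survivors))]  # variation
    population = survivors + offspring

best = max(population, key=fitness)
print("best fitness:", fitness(best))
```

In the real pipeline the fitness evaluation is the expensive step, which is why it is delegated to supercomputer simulation before any design is actualised in cells.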

Hence, in order to bring the artefacts into actualisation, one of the key stages involves manually cleaving the cells into novel formations. These agglomerations of cells constitute unitary entities, which have been called “xenobots” to mark their strange origins and the fact that they exhibit novel behaviours that are not associated with the organs or organisms from which the cells were derived.

These novel entities exhibit some rudimentary movement for a few days until the cells eventually perish. This movement has been described as “walking”, and the authors describe the entity as “exploring” its aqueous environment. However, is this an appropriate description for an insensate agglomeration of cells – is it exploring or merely moving haphazardly?

Here it is worth noting that the cells in this case are cardiomyocytes (heart cells) derived from a species of frog. Each heart cell produces its own movement, and together the cells emanate “contractile waves”; by virtue of their self-organising properties, the overall agglomeration exhibits “emergent spontaneous coordination” among the individual cells to produce “coherent, phase-matched contractions”. In other words, the bundle of cells has some aggregate movement based on the way passive and beating cells have been layered together, which is hardly surprising. It is therefore important to note that any features of life this new type of entity possesses, such as movement and self-healing, are derived from processes that are already part of the living cells of which it is composed.

Hence, rather than representing an entirely new living system, the type of life it possesses is jumpstarted from the fertilised eggs of an already existing living system; that is, it is the product of biological bootstrapping.

The Guardian article on the work speculates that, “Xenobots might be built with blood vessels, nervous systems and sensory cells, to form rudimentary eyes. By building them out of mammalian cells, they could live on dry land.” However, the original research paper and the supplementary material only go so far as to speculate about the possibility of scaling toward the inclusion of “organ systems”, the possibility of equipping them with “reproductive systems”, and “metabolic engineering” to expand their lifespans.

It is in relation to this possibility of investing future versions with biological structures possessed by sentient lifeforms that the question of the moral significance has arisen, in particular, is there some threshold in the complexity and variety of biological features at which point these entities qualify as moral patients, and therefore must be treated in a way that is subject to moral principles and evaluations? That is, by continuing to augment the xenobots with additional biological structures and systems, would there be a point at which their moral status becomes a serious question, especially if they begin to exhibit complex, autonomous behaviours?

There is, however, an underlying moral issue that is neglected by this line of thought, which arises from the fact that the cells are already in the chain of life and taken from sacrificial host organisms, thus favouring the artificial assemblages over existing life as given in nature.

Besides the moral and ethical considerations related to these new entities, it may be argued that, as novel biological artefacts, they raise interesting ontological issues about how to categorise them between the living and the unliving, the organic and the inorganic, or indeed the natural and the artificial. These new entities thus represent examples of what may be described, paradoxically, as “artificial nature”, a paradox also expressed by the term “artificial life” (AL), the name of the research field itself.

Unlike other areas of the life sciences, AL seeks to understand the processes of life by computationally simulating the conditions of its emergence and development. Rather than seeking to understand the phenomena of life as they are found in nature, typically by reductive decomposition and analysis of complex living systems, AL seeks to understand life from the standpoint of being its maker. Hence, there is a significant epistemological issue to be addressed in the pursuit of “maker’s knowledge”, since it is not life as-it-is that is known, but that which we ourselves have made.

The question of life itself remains a mystery, and the rhetoric accompanying such work appears to impart powers to scientists that they do not actually possess, namely, the power to bring new forms of life into existence. From a theological perspective, power over the creation of life resides entirely with God. The concept of life is of central importance and is mentioned throughout the Qur’ān, most prominently in the name of God as “The Living and Everlasting” and in the ascription of the creation of the phenomena of life and death to God. Furthermore, there are several instances in the Qur’ān where the act of bringing something to life occurs through the hands of Prophets. In particular, Ibrahim, alaihi-Salām, asks God to show him how the dead are brought back to life, and it is mentioned that Isa, alaihi-Salām, brings a statuette of a bird to life by way of his breath, with the permission of God.

In the case of AL, the soul itself is reconceived as an informational entity. More specifically, in the case of xenobots, different designs are given an informational representation to be simulated in an artificial world, which is inherently stripped of any quality that does not admit quantification, such as colour, odour, purpose, meaning and, most importantly, life. This informational cast of being is then re-inscribed onto the stuff of life itself, and the result is described as a “living system” or a “novel lifeform”. Hence, the part of the new entity that is truly living is negated and re-purposed (i.e. re-programmed) according to ends determined by the scientists, to become something “as orthogonal as possible to existing species, yet capable of being built from existing cell types.”

Scientists are always seeking to open new frontiers for discovery, and in our present age life, and indeed mind, have become the frontiers for expansion. Such expansion entails a concomitant expansion in the range of ethical and moral consideration since, in contrast to pre-modern times, human actions now set in motion chains of events whose outcomes lie in the distant future and frequently involve interfering in processes that are not fully understood. However, unintended consequences are explicitly sought after in the field of AL, since a measure of success and progress in the field is the extent to which AL creations exhibit unpredicted behaviours.

Knowledge in modern science is concerned not with the contemplation of the essences of things but with power over them, and the form of power operative in this case is control and domination over living matter. Speaking to the Wired correspondent, one of the authors describes how the cells produce complex behaviours and raises the question, “And most importantly, how we can control it.”

The Spectre of High Frequency Advertising and Attention Trading

Yaqub Chaudhary[1], 9th January 2019

Submitted in response to the CDEI call for evidence on Online Targeting on 14 June 2019.

In 2018, global spending on ads across all media was estimated at $628.63 billion, with digital media accounting for 43.5% of the investment.1 Shoshana Zuboff coined the term “surveillance capitalism” to describe an entirely new model in the history of capitalism,2 in which human activity is harvested for behavioural data for predictive advertising, which may lead to the creation of a new financial asset for trading in “behavioural futures markets.”

Given this connection with financial markets, in this essay I argue that High Frequency Advertising (HFA) and High Frequency Attention Trading (HFAT) are likely future aspects of what has been described as the “attention economy”, which continues to become more efficiently structured to eliminate thought from consumption by short-circuiting “the process of reflection that stands between one’s recognition of a desire and its fulfilment via the market.”3

I argue that HFAT will come to bear a resemblance to High Frequency Trading (HFT) as different parties vie for monetizable impressions and attention on ad exchanges. Briefly, HFT strategies leverage very low latency networks, high performance computation and sophisticated algorithms to exploit sub-millisecond fluctuations in asset prices before the rest of the market.

There has been a surge of interest in algorithmic or programmatic approaches to advertising since 2015, the latest major development in online advertising since “real-time bidding” (RTB) became a trend on ad exchanges from 2009 onwards. Ad exchanges were in many ways modelled directly on financial stock exchanges, except that it is ad impressions rather than financial assets that are traded.4

Figure 1: The complex network of feedback loops involved in online advertising via ad exchanges that provide real-time bidding (copied from source under a CC license).7

Programmatic advertising in an RTB system involves audience identification, the auction, and ad display, which take place within 10 to 100 milliseconds of a web page or app containing an ad container being loaded on the user side. Ad exchanges offer bidding on impressions as they are being generated, which can therefore draw on highly granular behavioural and contextual data that improves ad targeting.5,6 Programmatic advertising has been met with much enthusiasm in marketing, since it allows millions of specific marketing decisions to be made on the basis of real-time information.
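The auction mechanic at the heart of RTB can be sketched in a few lines. The following is a simplified model of the sealed-bid second-price auction historically associated with ad exchanges (real exchanges add fees, private deals, and increasingly first-price rules); the bidder names and prices are hypothetical.

```python
# Simplified sketch of a sealed-bid second-price RTB auction: each
# bidder submits a CPM bid for the impression; the highest bidder wins
# but pays the second-highest bid (or the floor price if unopposed).

def run_auction(bids, floor_price=0.0):
    """bids: dict of bidder -> CPM bid. Returns (winner, clearing_price)."""
    eligible = {b: v for b, v in bids.items() if v >= floor_price}
    if not eligible:
        return None, None
    ranked = sorted(eligible.items(), key=lambda kv: kv[1], reverse=True)
    winner, _top_bid = ranked[0]
    # Second-price rule: pay the runner-up's bid, or the floor.
    price = ranked[1][1] if len(ranked) > 1 else floor_price
    return winner, price

winner, price = run_auction({"dsp_a": 2.40, "dsp_b": 1.95, "dsp_c": 0.80},
                            floor_price=1.00)
print(winner, price)  # dsp_a wins and pays 1.95
```

The point for the argument here is that this entire exchange, including bid evaluation across many parties, must complete within the 10–100 ms window before the page finishes loading.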

An example of bidding by impressions is Snapchat’s “goal-based bidding”, which allows advertisers to bid on engagement metrics, such as swiping, rather than just impressions. In this case, machine learning optimises ad delivery for specific advertising campaign objectives to place curated advertising into the slipstream of a user’s personalised newsfeed.

Several studies in 2018 have shown the extent of user tracking and data collection that takes place without user awareness. Across Google’s various applications and platforms, activities, locations, routes, purchases, personal data, playlists and app usage are collected; furthermore, data was found to be sent to Google’s servers even without user interaction.8 Other studies highlighted the extent of advertising in children’s apps,9 and an empirical study of almost one million apps found third-party tracking in 90.4% of the mobile apps studied.10 The most recent study found that 61% of apps on Android automatically transfer data to Facebook from the moment the app is opened, whether or not the user has a Facebook account.11

These studies illustrate how numerous analytics firms are now able to boast of the ability to track activity and usage patterns across multiple devices and follow each user’s “journey” through digital content networks. There are now hosts of companies offering marketing and advertising services with arrays of features to track and manipulate user behaviour.

In a recent essay, Jamie Bartlett describes a vision of the future of advertising in 2069 in which psychographics will be part of a cluster for techniques for total personalisation with “ad targeting so effective that you may well question the notion of free will altogether.”12 Here I argue that this future is much closer than anticipated and major features of this “vision of the dark future of advertising” are already present and being operationalised.

These developments stem from the unprecedented opportunity for “mass-interpersonal persuasion” identified by BJ Fogg just over a decade ago, based on the convergence of new internet-related technologies and the developer platforms offered by social network sites. Fogg argued that a number of factors were first combined when Facebook provided a platform for third-party developers to create applications linked to its social network, which paved the way for developers to use persuasive strategies in advertising and in user interface and experience design by leveraging a new combination of persuasive experience, automated structure, social distribution, rapid cycling, large-scale social networks and measured impact.13

The provision of these platforms and tools for developers by social network providers thus enabled the creation of more compelling forms of “persuasive technologies” by allowing software developers and engineers to incorporate techniques in applications that are directly derived from behavioural psychology. Future technologies will continue to provide enhanced characterisation of psychological states based on access to more granular data on individual environmental and behavioural patterns.

Recently, Brett Frischmann and Deven Desai discussed one mechanism by which feedback is used to control individual behaviour to conform to predictable patterns. For example, on a social network site this may be achieved by stimuli such as messages and content that induce simple responses of clicking and scrolling, which are fed back into a personalisation system that continuously adjusts the content to stimulate further engagement, activity and attention of the user. The overall structure of the programmatic feedback loop generates homogeneous responses from diverse ranges of user groups, which benefits the host site or app by drawing in more personal data and channelling activity towards affiliates or advertisers.14

This is an example of digital control of an analogue system or, more specifically, the use of algorithmic feedback to drive a target black-box system (human users) towards a steady-state condition, which in this case is achieved programmatically by highly engineered scripts.
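The steady-state framing above can be made concrete with a minimal control-loop sketch. Everything in it is hypothetical: the "user" is a toy saturating response function, and the personalisation system is reduced to simple proportional feedback on a single content parameter.

```python
# Toy sketch of the feedback loop described above: a "personalisation
# system" adjusts a content parameter (call it intensity) by simple
# proportional feedback so that a simulated user's engagement settles
# at a target steady state. The user model is hypothetical.

def user_engagement(intensity):
    # Hypothetical user response: engagement rises with intensity
    # but saturates (never exceeds 1.0).
    return intensity / (1.0 + intensity)

target = 0.8            # desired steady-state engagement level
intensity, gain = 0.5, 2.0
for _ in range(100):
    error = target - user_engagement(intensity)
    intensity += gain * error   # proportional correction each cycle

print(round(user_engagement(intensity), 3))  # settles near 0.8
```

The controller never needs a model of the user's internals; observed engagement alone is enough to steer the black box toward the operator's target, which is precisely the cybernetic character of the systems discussed here.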

It is crucial to understand that digitally connected networks provide bi-directional flows of information over an unprecedented channel of communication between individuals and concentrated computational clusters. New devices and communication standards, such as 5G, aim to continually increase the throughput and decrease the latency of the channels between nodes in the network, which enables the use of increasingly sophisticated techniques to modulate behaviour, or even digitally alter individual perceptions of reality.

Figure 2 Bandwidth and latency requirements for different 5G use cases (Source: GSMA report)15

One of the original ambitions of 5G was to offer an order-of-magnitude reduction in end-to-end latency, with a technical requirement of sub-1 ms latency and greater than 1 Gbps downlink speed,15 which has since been relaxed to less than 10 ms.16

Figure 3 Latencies in categorising complex visual stimuli in the mammalian (monkey) brain.17

These low latencies will allow bi-directional flows of data through ad exchanges and systems involved in the presentation of ads to take place at a latency that is an order of magnitude below the threshold of human conscious decision making of ~200 ms18–20, and faster than the refresh rate of typical device displays of 16.7 ms. Advanced networks, devices and sensors will therefore present new opportunities for more complex types of perceptual modulation of behaviour.

Bartlett imagines a company emerging in future decades with a name like “iData”, standing for “Individualized Dynamic Automated Targeting Applications”, with the motto “Know your customers better than they know themselves”, and which “will use natural language generation, a machine learning technique that works out what sort of message each person would best respond to and automatically creates it without any human involvement.”

Services for “programmatic creative” and “responsive ads” are detailed by digital advertising agencies in their own marketing material. One company boasts of over 10¹⁸ possible weekly creative variations auto-generated for each campaign.21 Another offers one-to-one video marketing and claims that “…we can construct the most persuasive one-to-one video message to motivate consumers to take action towards a purchase” by “semantic video indexing”, which indexes the narrative, contextual and emotional content of video using machine learning and machine vision to create millions of unique videos per minute, matched to each individual’s psychological profile, behaviour and context.22
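A figure on the order of 10¹⁸ variations is plausible as a simple combinatorial product of independently varied ad elements. The element names and option counts below are hypothetical, chosen only to show how quickly the product reaches that order of magnitude:

```python
# If each ad is assembled from independently varied elements, the
# number of distinct ads is the product of the option counts.
# These element names and counts are illustrative, not the agency's.

from math import prod

options = {
    "headline": 1000,
    "body_copy": 1000,
    "image": 1000,
    "colour_scheme": 1000,
    "call_to_action": 1000,
    "layout": 1000,
}

variations = prod(options.values())
print(f"{variations:.1e}")  # 1.0e+18
```

With only six varied elements of a thousand options each, the space of distinct creatives already exceeds what any human review process could inspect, which is why selection among them is delegated to automated optimisation.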

The constant automated iteration, testing and refinement of advertising by Bartlett’s imaginary iData is already in full effect, and digital ad agencies already provide these services to hundreds of the world’s largest firms. Here, Bartlett makes a crucial point: “The more personal and intrusive the profiling, the more effective the ads, which also creates greater opportunity to manipulate and control.”

In Zuboff’s masterwork on “The Age of Surveillance Capitalism”,23 she mentions Facebook’s machine learning “prediction engine” known as “FBLearner Flow”, which “ingests trillions of data points every day, trains thousands of models — either offline or in real time — and then deploys them to the server fleet for live predictions.” This system provides an experimentation platform on which engineers can easily build models to trial new forms of real-time manufactured experiences and interactions on Facebook, beyond core features such as the personalised feed, highlighted topics and advertising. In mid-2016, the system was capable of making 6 million predictions per second.24 A key point is that Facebook’s machine learning inference is “online”, making the firm’s lip-service to protecting customer data as vacuous and deceptive as its commitment to combatting disinformation on its services.25

Zuboff further elaborates on the concept of “behavioral futures markets”, where “prediction products are sold into a new kind of market that trades exclusively in future behaviour.” As a marketplace, “any actor with an interest in purchasing probabilistic information about our behaviour and/or influencing future behaviour can pay to play…”

This combination of real-time goal-based bidding, real-time monitoring of “engagement” metrics, and dynamically generated advertising therefore provides the basis for a new form of trading in the market for real-time human attention, priced according to conversion to monetary value and traded at high frequency.

I argue that describing these tools as simply “persuasive technologies” mischaracterises the level of psychological manipulation that may be achieved through their use. In his 2018 book, Stand Out of our Light, James Williams describes these as systems of “adversarial persuasion” and “industrialised persuasion”26 yet also finds these to be inadequate descriptions of the phenomenon of individualised control and manipulation now possible.

Personal computing, and globally networked information technologies, have brought about a revolutionary transformation in nudging and persuasion, which have acquired new potency by the mediation of computer-based communication technology. This fact was the basis of BJ Fogg’s derivation of the term “captology” in 1996 from the acronym CAPT, standing for “Computers as Persuasive Technologies”. The term belies a second ambition, not made explicit by proponents, of being the study of computer technology for the capture of attention, and by extension, control of the captivated beholder.

It has been argued since the early periods of the web that its function is to create “a globally predictable consumer culture”.27 It may now be said that this function has been fully realised in the digitally networked environment. Galloway argues, in his 2004 book Protocol, that the internet should not be regarded as simply “open” or “closed” but rather as a form that is fundamentally modulated: information flows through the network, but in a regulated manner, making it an inherently political technology.28 Furthermore, the very term cyberspace conveys the meaning of a controlled or governed space through its relation to the term cybernetics, which is derived from the word “governor”, from the Greek kubernetes, meaning “steersman”.29,30

In the mid-20th century, Bernays described the then modern forms of communication, such as print, radio and television, as “a highly organised mechanical web” with potent force for social good or evil.31 He therefore envisaged an engineering approach towards the formation of public consent to governmental programs or goals. Now, rather than Bernays’ mechanical web, the fields of electronic engineering and information systems engineering have woven a silicon web that effectively places human will and motivation in a subordinate position, as the target of control in a vast digital information processing system.

Names of the digital advertising agencies have been redacted since the author does not wish to promote their services.

Postscript & Recommendation

All forms of unsolicited communication (whether deemed to be in the public good or not) from centralised forms of power (state or private) to individuals should be untargeted, fully passive, unresponsive, and untethered from digital information and communication systems and infrastructure.

Questions framing the issue of targeting as a discussion of benefits & risks/harms or positives and negatives represent false dichotomies since the very concept stands in opposition to human freedom, liberty, dignity and autonomy. Governments of today should implement the strongest possible legal safeguards against the use of such practices by themselves or any future government or organisation.

Governments and policy makers should not allow themselves to be misled by the persuasiveness of those whose wealth and power have been acquired through systems of industrialised persuasion. We are only a hop, skip and a jump, or a nudge, push and a shove away from a pit of serpents whose complex coils can wrap the minds of their captives in new forms of neuro-totalitarianism from which humanity will not be able to escape.

[1] Yaqub Chaudhary, MEng, MRes, PhD

Templeton Fellow of Science and Religion

Cambridge Muslim College

1.          McNair, C. Global Ad Spending – eMarketer Trends, Forecasts & Statistics. (2018).

2.          Zuboff, S. Big other: surveillance capitalism and the prospects of an information civilization. J. Inf. Technol. 30, 75–89 (2015).

3.          Greenfield, A. Radical Technologies: The Design of Everyday Life. (Verso, 2017).

4.          Muthukrishnan, S. Ad Exchanges: Research Issues. in Internet and Network Economics (ed. Leonardi, S.) 5929, 1–12 (Springer Berlin Heidelberg, 2009).

5.          Busch, O. Programmatic advertising: the successful transformation to automated, data-driven marketing in real-time. (2015).

6.          Sang, Q., Karlsson, N. & Guo, J. Feedback control of event rate in online advertising campaigns. Control Eng. Pract. 75, 126–136 (2018).

7.          Nagel. Ad exchange. (2019).

8.          Schmidt, D. C. Google Data Collection Research. Digital Content Next (2018).

9.          Meyer, M. et al. Advertising in Young Children’s Apps: A Content Analysis. J. Dev. Behav. Pediatr. JDBP (2018). doi:10.1097/DBP.0000000000000622

10.        Binns, R. et al. Third Party Tracking in the Mobile Ecosystem. in Proceedings of the 10th ACM Conference on Web Science 23–31 (ACM, 2018). doi:10.1145/3201064.3201089

11.        Pi, F. How Apps on Android Share Data with Facebook – Privacy International 2018. 51 (2018).

12.        Bartlett, J. A Vision of the Dark Future of Advertising – 2069. Medium (2019). Available at: https://medium.com/s/2069/a-vision-of-the-dark-future-of-advertising-40347c6ed448. (Accessed: 8th January 2019)

13.        Fogg, B. J. Mass Interpersonal Persuasion: An Early View of a New Phenomenon. in Persuasive Technology (eds. Oinas-Kukkonen, H., Hasle, P., Harjumaa, M., Segerståhl, K. & Øhrstrøm, P.) 5033, 23–34 (Springer Berlin Heidelberg, 2008).

14.        Frischmann, B. & Desai, D. The Promise and Peril of Personalization. (2018).

15.        Understanding 5G: Perspectives on future technological advancements in mobile. (GSMA, 2014).

16.        The 5G era: Age of boundless connectivity and intelligent automation. (GSMA, 2017).

17.        Thorpe, S. J. & Fabre-Thorpe, M. Seeking Categories in the Brain. Science 291, 260–263 (2001).

18.        Libet, B. Unconscious cerebral initiative and the role of conscious will in voluntary action. Behav. Brain Sci. 8, 529–539 (1985).

19.        Brasil-Neto, J. P., Pascual-Leone, A., Valls-Sole, J., Cohen, L. G. & Hallett, M. Focal transcranial magnetic stimulation and response bias in a forced-choice task. J. Neurol. Neurosurg. Psychiatry 55, 964–966 (1992).

20.        Schultze-Kraft, M. et al. The point of no return in vetoing self-initiated movements. Proc. Natl. Acad. Sci. 113, 1080–1085 (2016).

21.        xxxxxxxxx. (2018). Available at: https://www.xxxxxxxxx.com/about. (Accessed: 27th December 2018)

22.        xxxxxxx. xxxxxxx (2018).

23.        Zuboff, S. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. (PublicAffairs, 2019).

24.        Introducing FBLearner Flow: Facebook’s AI backbone. Facebook Code (2016).

25.        Levin, S. ‘They don’t care’: Facebook factchecking in disarray as journalists push to cut ties. The Guardian (2018).

26.        Williams, J. Stand out of our Light: Freedom and Resistance in the Attention Economy. (Cambridge University Press, 2018). doi:10.1017/9781108453004

27.        Cubitt, S. Supernatural futures. in FutureNatural: nature, science, culture (1996).

28.        Galloway, A. R. Protocol: How Control Exists After Decentralization (Leonardo). (MIT Press, 2004).

29.        Wiener, N. Cybernetics: Or Control and Communication in the Animal and the Machine. (1948).

30.        Wiener, N. The human use of human beings: cybernetics and society. (Free Association Books, 1989).

31.        Bernays, E. L. The Engineering of Consent. Ann. Am. Acad. Pol. Soc. Sci. 250, 113–120 (1947).



There has recently been a surge of development in augmented reality (AR) technologies, which has led to an ecosystem of hardware and software for AR, including tools that allow artists and designers to create AR content and experiences without complex programming. AR is viewed as a key “disruptive technology”, and future display technologies (such as digital eyewear) will provide seamless continuity between reality and the digitally augmented. This article will argue that the technologization of human perception and experience of reality, coupled with the development of artificial intelligence (AI)–based natural language assistants, may lead to a secular re-enchantment of the world, in the sense outlined by Charles Taylor, in which human existence is shaped through AR inhabited by advanced personal and social AI agents in the form of digital avatars and daemons. It will further argue that enchantment has been persistent throughout the formation of modernity and is being rekindled by the integration of AI in the plane of AR.


Below are a few slides from a talk associated with this paper delivered at the Technology & New Media Research Cluster meeting in the Department of Sociology, University of Cambridge on Monday 29 October 2018.

2019 Cambridge Muslim College Religion & Science Conference

“Mind & World for Humans & Machines”

Saturday 4th May 2019

Crausaz Wordsworth Building, Robinson College, Cambridge

Cambridge Muslim College is pleased to announce the 2019 Religion & Science conference supported by the John Templeton Foundation on “Mind & World for Humans & Machines” to be held on Saturday 4th May 2019.

The conference follows on from two earlier conferences on the themes of intelligence and consciousness. The forthcoming conference will continue to focus on scientific, philosophical and theological perspectives on intelligence and will further aim to address how scientific and technological developments are informing our understanding of how humans and machines represent and make epistemic contact with the world, as well as the nature of autonomy, agency and action for humans and machines.

In the past year, AI has continued to advance rapidly yielding notable developments in key areas such as natural language processing and machine vision, which has renewed ambitions of attempting to develop artificial general intelligence. AI systems are now capable of significantly surpassing human capabilities in increasingly complex domains and are having a substantial impact on the nature of research in the physical sciences and humanities. In addition, there has been progress in designing systems that outperform highly ranked players in challenging games that require long-term planning, strategic decision making, and reasoning based on the imperfect knowledge of the microworlds of the games.

However, artificial agents are not intended to remain confined to the virtual microworlds in which they are gestated and trained, and their activities are gradually being transposed into the physical world. In concert with this is the transposition of human activity and presence into the digital world of artificial agents and machine forms of intelligence. This new informational environment is viewed as subsuming both cyber and physical space into a unified, artificially constructed virtual world that is better suited to the capacities of machines than of humans. The intersection of humans and machines in the shared space of the “information sphere” entails what Luciano Floridi has described as a “re-ontologization of our environment and of ourselves.”

The conference will therefore consider issues arising from the reconstruction of mind and world and how these developments are challenging our understanding of the nature of mind and world from scientific, philosophical and theological perspectives.

Enchanted by Disenchantment

A critical problem facing the Muslim community is a failure to recognise how techno-science represents a comprehensive religious worldview comprising a detailed cosmogony, mythology, ideology, soteriology and eschatology that imposes its own narrative of the origin, past, present and future of mankind and the universe.

In many ways, science appears to have been in the ascendancy and theology has been in retreat. The products of science and technology that pervade every facet of daily life are wielded as powerful arguments to support the metaphysical claims of modernity and postmodernity. In reality, modern science eliminates thought from processes related to knowledge by reducing thought to know-how, and the technological environment is structured to place humans in a cybernetic feedback loop where thought is eliminated from consumption, to “short-circuit the process of reflection that stands between one’s recognition of a desire and its fulfilment via the market” (Greenfield 2017).

The rapid pace of science and technology development has meant that by the time ordinary religious scholars have realised there may be deep issues to address, entire paradigms of discourse have been transformed. In its most aberrant manifestations, the scientific view of nature is regarded as absolute and is used to make religion innocuous by “the disenchantment of the world.” Everything outside the empirical sciences is rejected as false, and nature, life and mind are reduced to mechanistic principles, yet this view disregards the prior metaphysical commitments underlying scientific research programmes.

At these points, methodology veers into metaphysics with the belief that science will inevitably solve the remaining mysteries of the universe, decrypt the mystery of the mind and replenish the biome. There is widespread hope that technology will reconstruct the natural world from the destruction it has itself produced by chains of unintended consequences. Society is “counting on technology to rescue us from the very impasse into which it has led us,” in the words of Dupuy.

The crisis for the Muslim community is that many have become enchanted by the promise of instrumental rationality, which is the driver of disenchantment. Hence, for increasing numbers of Muslims, (i) knowledge has been recast in terms of know-how, (ii) thought, reflection, understanding and meaning have been separated from the functions of the intellect, and (iii) the sacred dimension of reality is rendered causally ineffective to the point of its elimination.

However, recent works re-examining secularisation theory show that modernity constitutes a false paradigm and the “myth of disenchantment” has functioned as a “regulative ideal” to be modulated towards the suppression or revitalisation of the supernatural, to serve the dictates of power and ideology. If this view is accurate, the Muslim community is, in effect, enchanted by (the myth of) disenchantment.

The consequence of this situation is that the Muslim community is intellectually, spiritually and morally unequipped for the fact that we are on the cusp of new forms of quasi-religion for postmodernity. For example, transhumanism has been viewed as a science-based alternative to religion that claims to achieve all of the traditional supernatural goals of religion. Such futuristic visions gain plausibility amongst the public by progress in AI, cognitive science, psychology, neuroscience and nanotechnology, the convergence of which represents the apotheosis of Western metaphysics. According to Dupuy, “…the aim of this distinctively metaphysical program is to place humankind in the position of being divine maker of the world, the demiurge, while at the same time condemning human beings to see themselves as out of date.”

Approaching solutions to these problems will need to begin by recognising the underlying religious features of techno-science that stand in opposition to the Islamic worldview and not succumbing to the silicon grip of the techno-political economic system that is characterised by the archetypes of Haman, Firawn and Qarun.

(This short essay was written in November 2018 in response to a question on the key problem facing the Muslim ummah and possible solutions.)

A Brief Comment on AI Ethics and Islam

The following is the second part of a brief postscript of my thoughts, from an Islamic perspective, on the three key areas related to AI Ethics discussed at the Faith and AI Consultation I attended earlier this year.

Written on 2 May 2018 (extended on 18/10/2018)

In general, Muslim society has a highly favourable outlook towards technology and would welcome a managed and thoughtful deployment of AI that is conducive to the improvement of all life, society and the environment.

Ethical questions in Islam are addressed from within one of the four frameworks of Islamic legal reasoning (the four schools), whose aims may be summed up as seeking five key objectives, namely, the protection of life, honour, religion, wealth and lineage.

Human Augmentation and Personhood

Islamic ethical guidelines for new forms of augmentation based on emerging AI and converging technologies will require case-by-case consideration for each class and category of augmentation, in relation to the faculty, sense or trait to be augmented and the degree to which it is to be augmented.

Additional considerations for Islamic scholars may include whether an augmentation interferes with Islamic rituals, namely, performative acts of worship. However, if deemed permissible, any augmentation would not alter personhood or the status of obligation with respect to the five pillars of Islamic practice.

However, many crucial questions remain to be answered, such as can these technologies be introduced without significant disruption, stratification and alienation in society? Does the desire to be augmented express a fundamental conceit against the natural form of God’s creation? What would be the consequences for human experience and are these augmentations conducive to the primary goal of life, according to Islamic teachings, which is to journey to God with sound belief and purified intentions?

Use of AI in Life and Death Decisions

The sanctity of human life is paramount in Islam, hence Islamic legal reasoning is most likely to yield an approach that reflects an overabundance of caution in the use of neural networks for such decisions. Mercy and compassion are at the very root of Islamic teachings, and are directly invoked before undertaking any action.

A possible view for involving AI in decision making that is full of mercy and compassion may be to train AI using supervised learning techniques on data that reflects human-made policies encapsulating core virtues and key values. Once the underlying patterns of human decisions in particular domains are learned, only systems that achieve zero errors on test sets should be considered for deployment, followed by rigorous testing against a range of failure modes.
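
The gated-deployment idea above can be sketched minimally. This is an illustration only, not a system described in the consultation: the 1-nearest-neighbour learner, the triage-style feature names and the data are all hypothetical stand-ins for a real supervised model trained on human policy decisions.

```python
# Sketch (hypothetical): learn human-made policy decisions with a simple
# supervised model, and only mark the system deployable if it makes zero
# errors on a held-out test set.

def fit(examples):
    """'Training' for 1-nearest-neighbour: memorise the labelled
    (features, decision) pairs drawn from human policy decisions."""
    return list(examples)

def predict(model, x):
    """Predict the decision of the closest remembered example
    (squared Euclidean distance)."""
    def sqdist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda ex: sqdist(ex[0], x))[1]

def zero_error_gate(model, test_set):
    """Deployment gate: every held-out decision must match the human policy."""
    return all(predict(model, x) == y for x, y in test_set)

# Hypothetical policy data: (urgency, resource availability) -> decision.
train = [((0.9, 0.8), "intervene"), ((0.2, 0.9), "monitor"),
         ((0.8, 0.1), "escalate"), ((0.1, 0.2), "monitor")]
test = [((0.85, 0.75), "intervene"), ((0.15, 0.85), "monitor")]

model = fit(train)
deployable = zero_error_gate(model, test)  # any single test error blocks deployment
```

In practice the zero-error criterion would be only the first gate; the subsequent testing against failure modes would then have to probe inputs that lie outside the distribution the test set covers.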

If the benefits and robustness of AI-automated decision making in time-critical scenarios are established, then it may be considered favourable to adopt such systems, with continuous human auditing and verification. However, where decisions are not needed urgently, explainable, transparent advice for implementation by a human actor may be preferable.

Implications of AI for Faith Communities

As machines are becoming endowed with increasing degrees of intelligence, the general concern from all spheres of society may be summarised as shifting from concern over the misuse of technology by us to the misuse of us by technology.

In this connection, and with respect to prominent discussions on AI explainability, interpretability, accountability and transparency, the prescient warning of Hannah Arendt in the prologue to her 1958 work “The Human Condition” acquires vital new importance. Here, Arendt describes a situation where we become unable to understand (in normal expression through thought and speech) the things we are, nevertheless, able to do through technology:

“In this case, it would be as though our brain, which constitutes the physical, material condition of our thoughts, were unable to follow what we do, so that from now on we would indeed need artificial machines to do our thinking and speaking.

“If it should turn out to be true that knowledge (in the modern sense of know-how) and thought have parted company for good, then we would indeed become the helpless slaves, not so much of our machines as of our know-how, thoughtless creatures at the mercy of every gadget which is technically possible, no matter how murderous it is.”

2018 Religion and Science Conference on Artificial Intelligence

Artificial Intelligence

15th & 16th September 2018

WYNG Gardens, Cambridge

Conference Introduction

As many of you will be aware, AI has had multiple waves of interest over the past few decades. At the peak of an earlier wave, in the late 1990s, the most prominent AI lab of the time, at MIT, had an adjunct theologian working alongside AI technologists, scientists and engineers.

During this time, a series of lectures under the title “God and Computers: Mind, Machine and Metaphysics” ran over the semester, which involved ten talks from eminent AI practitioners of the era, who directly addressed issues of theology and philosophy raised in connection with their research in AI.

At our present time, the level of intellectual philosophical discussion is eclipsed by the level of technical work in the field and, of course, the situation is more acute for theology, where the level of discussion rarely goes beyond the superficial.

Hence, it is assumed that questions of intelligence and mind are solely the province of cutting-edge science and technology. However, this has not always been the case; for example, Warren McCulloch, a founding figure of the cybernetic precursor to AI, described how he was motivated by the deeply philosophical question:

“What is a Number, that a Man May Know it, and What is a Man, that He May Know a Number?”

Looking further back into history, we find that digital metaphysics has not been merely incidental to the project of AI but is at the very origins of the concept of the digital.

It is now well known that Newton was deeply religious; the revolutionary figure whose ideas gave birth to digital computing also followed a deeply religious path. I am referring here to George Boole, whose work has been fundamental to the logic of digital computing.

In 1833, aged seventeen, Boole had a mystical experience, recounted as follows: “The thought flashed upon him suddenly one afternoon as he was walking across a field that his main ambition in life was to explain the logic of human thought and to delve analytically into the spiritual aspects of man’s nature through the expression of logical relations in symbolic or algebraic form.”

He said, “We are not to regard Truth as the mere creature of the human intellect.”

He further said, “It is given to us to discover Truth.”

The purpose of his study of mathematics and nature was simply, “To justify the ways of God to Man.” According to his biographer, “His binary algebra, in which the number one symbolised the universal class, quite possibly reflected his Unitarian belief in the unity of God.”
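
By way of illustration (this gloss is mine, not the biographer’s), the algebra of classes in Boole’s *Laws of Thought* (1854) can be stated briefly:

```latex
% In Boole's algebra of classes, 1 denotes the universe of discourse,
% 0 the empty class, and xy the class of things in both x and y.
% Every class symbol obeys the "law of duality" (idempotence):
x^{2} = x ,
% which Boole rewrote as
x(1 - x) = 0 ,
% i.e. nothing belongs to both a class x and its complement 1 - x:
% the principle of non-contradiction, recovered algebraically.
% Restricting the symbols to the two numbers satisfying x^2 = x,
% namely 0 and 1, yields the binary algebra underlying digital logic.
```

It is this restriction to the values 0 and 1 that connects Boole’s theologically motivated analysis of thought to the digital circuits of the present day.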

Hence, not only have the questions posed by AI and neuroscience been the domain of philosophers and theologians for millennia; philosophy and theology have also been at the very origins of the principles underlying the technology. It is therefore most natural to consider the latest findings about the nature of intelligence from the perspectives of philosophy and theology, as we will do over the next two days of this conference, with papers from prominent figures in present-day AI.

Faith and AI

Earlier this year (May/June 2018), I was honoured to be invited to attend a wonderful consultation on Faith and AI at St George’s House, Windsor Castle, organised by St George’s House, The Alan Turing Institute and The Faraday Institute. The following is a brief post-script of my thoughts on the below question that characterises the broad theme of the consultation:

“In what ways might those with religious conviction be interested in the development and deployment of AI?”

At one end, religious leaders will need to articulate guidance on ethical questions and concerns, issues of trust, transparency and accountability, and on societal challenges, such as the future of work. At another end, where discussion on practical matters overflows into philosophical discussions beyond the realm of scientific enquiry, religion will need to address issues about human augmentation, and new beliefs in superintelligence, technological singularity and the deification of man as maker and master of the future.

The core of the Islamic message is to direct humankind to God. To this end, the Qur’anic narrative presents numerous stories and parables that serve as instructive archetypical examples for reflection to re-orientate the individual and collective journey to God. In the Qur’anic narrative about Moses is an incident about his people being led astray by an enigmatic figure, known as al-Samiri, that took place after they had reached safety and security from their pursuers by crossing the Red Sea.

While Moses was absent responding to the call of God, al-Samiri claimed God and Moses had abandoned them and began instigating the people to cast the gold they were carrying into an effigy of a calf. Imagining himself to have seen some supernatural phenomena no one else had seen at the footprint of an archangel, he took a handful of earth from the place of the footprint and threw it at the calf, which then seemed to utter a lowing sound. The people became enchanted by their own creation and their reverence of God was diverted to the idol. When Moses returned, the excuse of al-Samiri for instigating this diversion, and the ensuing discord amongst the people, was nothing more than following the self-suggestion of an egoistic whim.

Many experts have commented on the hubris accompanying the development of AI and we have already glimpsed its potential to bring both immense good and harm. Hence, followers of religion may wish to ask whether the AI entities we are developing are diverting mankind from higher purposes in a similar fashion with ersatz reproductions of intelligence, speech, vision and autonomous action in an egoistic race for profits and domination of markets. Will we become enchanted and enchained to contemporary idols in the form of seemingly omnipresent self-aware systems, which simulate only the husk of intelligence with intelligible yet meaningless utterances that are nothing more than the lowing of hollow, lifeless simulacra?

In Islam, intelligence is multi-dimensional, linking man to the material world, higher orders of meaning and God. Intelligence is a key quality that differentiates humans from animals; it was by virtue of Adam being created by God with a rational faculty and being taught the universal archetypes (“all of the names”) that God instructed the angels and Iblis (the devil) to bow to him. The centrality of the intellect in Islam led many Muslim theologians and philosophers to develop detailed psychological theories within their works, and for centuries Muslim scholars discussed issues of free will, agency, knowledge, logic and reasoning, the nature of perception, mental representation and imagination.

These capacities for meaning, understanding and higher-order cognition remain the most formidable challenges on the path towards artificial general intelligence, if it is achievable at all. The potent potential of human intelligence is commensurate with its key purpose of coming to know Ultimate Being – God, and human beings are tasked with moderating their intelligence by numerous other qualities such as mercy, compassion and wisdom. What, then, may be the consequences of deploying machines endowed with only the narrowest aspects of our intelligence, in the form of distilled instrumental rationality that is devoid of embodied, social and spiritual dimensions?