
Can We Engineer Consciousness? Biomedical Engineering at the Edge of Neuroscience

  • Writer: Taicia Kiuna
  • Dec 4
  • 14 min read

For decades, consciousness has been treated as one of science’s deepest mysteries - something almost philosophical, seemingly impossible to replicate and too abstract for many engineers. The idea of engineering something we can’t even define has sounded impossible: how can you design something that can’t be defined? How do you measure, let alone recreate, an experience? How do you study something as complex as the mind?


But this boundary is slowly dissolving as tools and innovations in neural engineering, computational neuroscience and biomedical technology continue to advance, posing the very real question: what if consciousness really isn’t beyond our engineering capabilities? We have already found ways to manipulate consciousness with anaesthesia, restore communication through brain-computer interfaces and grow miniature brain organoids that behave in unexpectedly complex ways. None of these technically create consciousness, but they are already pushing a boundary that was previously thought to be unreachable.


In this article, I explore whether consciousness can be approached not as a philosophical concept but as an engineering challenge. By examining what can be measured, stimulated and influenced inside the brain, I want to understand whether the ‘mind’ is truly as complex as it seems, or whether science is slowly edging closer to being able to recreate it.


  1. The Challenge: Defining Something We Still Don’t Fully Understand


Before anything can be engineered, it first needs to be defined. However, consciousness is uniquely difficult because it involves multiple components and many different areas of the brain, leading to many different viewpoints on what it actually is.


Previously, it was believed that consciousness depended on the proper functioning of midline brain structures, while the content of an experience was thought to arise from activity in particular regions of the cerebral cortex (Jang and Lee, 2015). The maintenance of behavioural arousal and consciousness has also been attributed to the neurons of the ascending reticular activating system (Jones 2003; Englot et al. 2017). More recently, some researchers have argued that consciousness arises from the frontal regions of the brain - areas such as the central anterior and prefrontal cortex - while others argue that the rear regions, including the occipital/parietal and central posterior regions, are responsible (Koch et al. 2016; Seth 2018).


Despite their differences, these views all support the idea that consciousness doesn’t arise from a single ‘location’ within the brain, but is rather an emergent property of our brain system, which makes it difficult to define. Although a set definition may be elusive, it is widely agreed that consciousness is made up of two key components: wakefulness and awareness (Calabrò et al. 2015).


Wakefulness refers to the level of consciousness, whereas awareness - of the environment and of self - is the content of consciousness. Medically, consciousness refers to a person’s awareness of themselves and their surroundings, as well as their ability to respond to external stimuli and internal needs (Calabrò et al. 2015).


This slightly vague definition hasn’t stopped biomedical engineers from approaching consciousness not as a single object, but as a set of measurable biological processes. These include patterns of neural synchronisation (Dai et al. 2024), information integration across brain networks (Tononi 2004) and the ability to redirect attention, react to stimuli and undergo predictable state transitions (e.g. sleep → wake → anaesthesia) (Fan et al. 2023). Engineers aren’t trying to define consciousness philosophically; rather, they are trying to quantify it - and quantification is the first step towards engineering.


  2. Can Consciousness Be Measured?

One of the biggest scientific breakthroughs in understanding consciousness emerged from research into anaesthesia, comas and disorders of consciousness. All of these conditions raise the same question: how do you determine whether someone is conscious when they can’t move, speak or respond? Traditional behavioural assessments are of little use in situations like this, so biomedical engineers began searching for something objective - measurable neural signatures of consciousness.


Perturbational Complexity Index (PCI)

The development of the PCI stemmed from the clinical problem of determining whether a patient is conscious - and, in particular, of reliably distinguishing people who are conscious but unable to respond (e.g. people with locked-in syndrome, who are aware but cannot move or speak) from people who are actually unconscious. Casali et al. (2013) extended their previous work on electrical correlates of consciousness to define an electroencephalography-derived index of human consciousness - the PCI.


PCI measures the brain’s response to direct stimulation using transcranial magnetic stimulation (TMS). By calculating the likely sources of signals in the brain and comparing results across known levels of consciousness, the authors derived PCI values. In 32 awake, healthy subjects, these values ranged between 0.44 and 0.67, but dropped to 0.18-0.28 for subjects in non-rapid eye movement (NREM) sleep. To test whether artificially induced unconsciousness - anaesthesia - had the same effect on PCI, the authors tested patients administered various anaesthetics. These drugs also produced low ‘unconscious’ PCI values: midazolam in the range 0.23-0.31, propofol 0.13-0.30 and xenon 0.12-0.31. To verify the validity of these results, patients in a vegetative state, in a minimally conscious state and with locked-in syndrome were all tested. As expected, those in a vegetative state were clearly ‘unconscious’, in the range 0.19-0.31; those with locked-in syndrome fell in the ‘conscious’ range, with results between 0.51 and 0.62; while those who were minimally conscious showed intermediate values of 0.32-0.49 (Casali et al. 2013).


The key idea is simple: a conscious brain produces rich and integrated patterns of activity while an unconscious brain (during deep sleep, anaesthesia or a coma) produces simpler, more stereotyped responses. From here, it can be concluded that the higher the complexity of the neural signals detected, the higher the probability of consciousness.
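
The core ingredient behind PCI is algorithmic (Lempel-Ziv) complexity: binarise the evoked response and ask how compressible it is - rich, varied activity compresses poorly, stereotyped activity compresses well. The sketch below illustrates only that idea, with made-up binary signals standing in for TMS-evoked EEG; it is not the published PCI pipeline.

```python
# Toy illustration of the compressibility idea behind PCI (Casali et al. 2013).
# A "rich" signal yields a normalised complexity near 1; a stereotyped
# signal yields a value near 0. Signals here are synthetic stand-ins.
import math
import random

def lz76_complexity(s: str) -> int:
    """Number of phrases in the Lempel-Ziv (1976) parsing of a binary string."""
    i, k, l, k_max, c, n = 0, 1, 1, 1, 1, len(s)
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:          # no earlier match found: new phrase
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

def normalised_complexity(s: str) -> float:
    """Scale the phrase count by the expected count for a random string."""
    n = len(s)
    return lz76_complexity(s) * math.log2(n) / n

random.seed(0)
rich = "".join(random.choice("01") for _ in range(2000))  # varied, "integrated" response
flat = "01" * 1000                                        # stereotyped response
print(normalised_complexity(rich), normalised_complexity(flat))
```

The varied signal scores close to 1 and the repetitive one close to 0, mirroring the conscious/unconscious separation the real index exploits.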


Integrated Information Theory (IIT)

Another foundational framework of consciousness is integrated information theory (IIT), first proposed by Tononi (2004) and later expanded into a full mathematical framework (Tononi 2014). IIT is based on the concept that consciousness depends on a system’s ability to integrate information into a unified whole. More simply, every experience is highly informative (distinct from all others) and integrated, meaning it can’t be broken down into independent parts without losing the experience itself.


IIT predicts that when neural networks are unable to interact, such as during anaesthesia or certain brain injuries, consciousness fades. Researchers have found support for this hypothesis by measuring the relationship between network criticality (a balanced state between a large variation of functional and structural network configurations), Φ (a mathematical measure of a system’s integrated information) and the level of human consciousness. Using a large-scale brain network model and electroencephalography (EEG) recorded at various levels of human consciousness under general anaesthesia, the following relationship was established: at a conscious resting state, the highest levels of Φ and criticality were recorded, and both slowly reduced as the response rate of the patients decreased (Kim & Lee 2019). This suggests that network criticality is a necessary condition for a large Φ in the human brain, while also supporting the idea that integration of information in the brain is not just correlated with, but may be the main condition required for, consciousness.
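
Computing Φ exactly is intractable for real brains, but the underlying intuition - that information shared across a "cut" of the system collapses once its parts stop interacting - can be sketched with a simple Gaussian proxy. This is an illustration of the intuition only, not a computation of Tononi’s Φ; the coupled-signal model is invented for the example.

```python
# Toy proxy for the IIT intuition: for Gaussian variables, the information
# shared across a cut is I = -0.5 * log2(1 - r^2), where r is the
# correlation. Severing the coupling (r -> 0) drives it to zero.
import math
import random

def simulate(coupling: float, n: int = 20000, seed: int = 1):
    """Two noisy signals; `coupling` mixes signal b toward signal a."""
    rng = random.Random(seed)
    a = [rng.gauss(0, 1) for _ in range(n)]
    b = [coupling * x + math.sqrt(1 - coupling ** 2) * rng.gauss(0, 1) for x in a]
    return a, b

def correlation(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    sx = math.sqrt(sum((xi - mx) ** 2 for xi in x) / n)
    sy = math.sqrt(sum((yi - my) ** 2 for yi in y) / n)
    return cov / (sx * sy)

def shared_information(x, y) -> float:
    """Gaussian mutual information across the cut, in bits."""
    r = correlation(x, y)
    return -0.5 * math.log2(1 - r ** 2)

intact = shared_information(*simulate(coupling=0.9))   # strongly interacting halves
severed = shared_information(*simulate(coupling=0.0))  # halves cannot interact
print(f"intact: {intact:.2f} bits, severed: {severed:.4f} bits")
```

With the coupling intact the two halves share over a bit of information; with the coupling cut the shared information all but vanishes - the same qualitative pattern the Φ/criticality studies report across anaesthetic depth.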


Why do these frameworks matter?

Neither PCI nor IIT claims to ‘solve’ the mystery of consciousness, and both avoid making philosophical claims about subjective experience. However, they do offer something critical for engineers: objective, quantifiable tools that allow us to predict, track and even manipulate states of consciousness.


This has real implications in: diagnosing minimally conscious patients with high sensitivity and specificity (Sinitsyn et al. 2020), detecting covert awareness (Aubinet et al. 2025) and building the foundations for future neuro-computational interfaces (Pan et al. 2022).



  3. What Can We Already Do?

Although we still cannot recreate a full conscious experience, biomedical engineers have been able to control and manipulate certain aspects of consciousness. They have even gone as far as restoring lost functions, such as speech, in certain patients using neurally integrated devices.


Anaesthesia and Reversible Suppression of Consciousness 

Modern anaesthesia is one of the clearest and most widespread examples of biomedical engineering manipulating consciousness safely and reversibly. Contemporary anaesthesiology relies on pharmacology as well as computational models of neural response, EEG-based monitoring systems and closed-loop control algorithms - feedback mechanisms that automatically adjust the administration of anaesthetic to maintain a set target effect, such as the desired depth of anaesthesia. Together, these help predict and control the transitions between consciousness and unconsciousness.


Neural monitoring techniques such as the Bispectral Index (BIS) are tools used to maintain a stable loss of consciousness and prevent awareness during surgery (intraoperative awareness) (Purdon et al. 2013). BIS analyses EEG patterns to produce a single value from 0 to 100 representing a patient’s level of brain activity (0 means complete brain inactivity, 100 corresponds to an awake state), allowing for an appropriate depth of anaesthesia and faster waking times. For general anaesthesia, values between 40 and 60 are generally regarded as adequate, while anything below 40 indicates a deep hypnotic state (Mathur et al. 2023).
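
As a sketch of the closed-loop idea, consider a toy proportional controller that nudges a fictional infusion rate to hold BIS near the middle of the 40-60 band. Every number here - the gain, the infusion units, the patient dynamics - is invented for illustration; real systems rely on validated pharmacokinetic/pharmacodynamic models.

```python
# Minimal closed-loop sketch: read the BIS value, compare it to a
# surgical target inside the adequate 40-60 band, and adjust the
# infusion rate accordingly. All dynamics and gains are toy numbers.

TARGET_BIS = 50    # middle of the adequate 40-60 band
GAIN = 0.002       # proportional gain (arbitrary illustrative value)

def adjust_infusion(current_bis: float, rate: float) -> float:
    """Proportional step: BIS above target -> more drug, below -> less."""
    return max(0.0, rate + GAIN * (current_bis - TARGET_BIS))

def toy_patient(bis: float, rate: float) -> float:
    """Invented dynamics: BIS drifts toward 100 (awake), drug pushes it down."""
    return max(0.0, min(100.0, 0.9 * bis + 10.0 - 30.0 * rate))

bis, rate = 95.0, 0.0          # awake patient, no anaesthetic yet
for _ in range(200):           # repeated measure-and-adjust cycles
    rate = adjust_infusion(bis, rate)
    bis = toy_patient(bis, rate)
print(f"steady-state BIS ~ {bis:.1f}")
```

After a brief transient, the controller settles the toy patient at the target, which is the whole point of closed-loop depth-of-anaesthesia control: hold a measurable index steady rather than dose by fixed schedule.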


Brain-Computer Interfaces (BCIs)

Brain-computer interfaces create a direct pathway between conscious intentions and external actions. They don’t generate consciousness but rather interface with it: they translate neural patterns linked to attention, intention or imagined action into meaningful outputs.


Recent breakthroughs in this field include:


Restored speech for patients with anarthria (the loss of the ability to articulate speech) and for paralysed patients, using BCIs to decode neural signals from the speech cortex and reconstruct words and sentences in real time (Moses et al. 2021).


Recovered movement through spinal stimulation for patients with complete spinal cord injuries, using closed-loop brain-spine interfaces that have been able to restore voluntary walking (Lakshmipriya et al. 2024).


Digital memory-assistance prototypes - neuroprosthetic devices that can enhance memory encoding by stimulating hippocampal patterns (Hampson et al. 2018), paving the way for engineered cognitive support systems.
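
The decoding step behind the speech breakthrough can be caricatured as pattern classification: each attempted word evokes a characteristic pattern of cortical activity, and the decoder picks the closest known pattern. Real systems like that of Moses et al. use deep neural networks plus a language model; the word templates and "neural" vectors below are entirely synthetic stand-ins for illustration.

```python
# Caricature of word decoding in a speech BCI: nearest-template
# classification of a (synthetic) neural feature vector.
import math
import random

# Hypothetical per-word activity templates (e.g. averaged electrode features).
templates = {
    "hello":  [0.9, 0.1, 0.3, 0.7],
    "water":  [0.2, 0.8, 0.6, 0.1],
    "family": [0.5, 0.4, 0.9, 0.2],
}

def decode(activity):
    """Return the word whose template is closest to the observed activity."""
    return min(templates, key=lambda w: math.dist(activity, templates[w]))

# Simulated trial: the patient attempts "water"; electrodes add noise.
rng = random.Random(42)
trial = [x + rng.gauss(0, 0.05) for x in templates["water"]]
print(decode(trial))
```

Running this decodes the noisy trial back to "water". The real engineering difficulty lies in learning good templates (or network weights) from noisy, drifting neural data and stringing word guesses into fluent sentences in real time.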


Neuromodulation (DBS & TMS)

Biomedical engineers can also modulate conscious states through invasive and non-invasive neural stimulation technologies such as deep brain stimulation (DBS) and transcranial magnetic stimulation (TMS).


DBS uses an implanted, battery-operated device to send electrical signals and stimulate specific areas of the brain. Stimulation of the subthalamic nucleus has been directly linked to reducing symptoms of Parkinson’s disease such as akinesia, rigidity and tremor (Benabid et al. 2000).


TMS is a non-invasive procedure that uses magnetic fields to stimulate nerve cells in the brain. Studies have shown that in some patients with disorders of consciousness, stimulation can increase responsiveness or shift a patient from an unresponsive to a minimally conscious state (Gosseries et al. 2014). Other studies have also shown its effectiveness in treating medication-resistant depression (George & Post 2011).


  4. Synthetic Consciousness: Can We Build It?

This is the most controversial aspect of the discussion around consciousness: could the processes responsible for consciousness be reproduced outside of the brain? As mentioned before, the hardest part of engineering consciousness is that it is not one singular process but many different processes and signals, all integrated together to produce an experience. However, there are several experimental approaches that engineers are actively pursuing in the hope of recreating something so complex.


Brain Organoids

Miniature brain organoids grown from human pluripotent stem cells (cells that can self-renew and differentiate into almost any cell in the body) can organise themselves into layered neural tissue and develop spontaneous electrical activity. Several groups have shown that these organoids can generate coordinated network activity and oscillatory dynamics reminiscent of early developmental brain activity (Mancinelli et al. 2025). They have also been shown to respond to pharmacological manipulation by modulating their activity patterns (Sharf et al. 2022). Some organoids have even displayed sleep-like oscillations and complex spatiotemporal patterns that were once thought to require an intact brain (Sharf et al. 2022).


These results don’t mean that organoids are conscious in any ordinary sense, but they do show that lab-grown neural tissue can reproduce some of the complex functional dynamics once thought to require a whole brain. However, the creation of and experimentation on these organoids raise many ethical concerns, which make it harder to advance in this field.


Large-Scale Neural Simulations

Supercomputing projects, such as the Blue Brain Project and related efforts, can now build biologically detailed simulations of neural tissue, providing a deeper understanding of the electrical and chemical activity of neurons, synaptic plasticity and neural coding (Markram 2006; Arachchige 2023). These simulations can reproduce patterns of activity, as well as emergent properties and dynamics observed in real neural networks. This is a large step towards simulating larger regions of the brain and, ultimately, the entire human brain - if consciousness emerges from particular patterns of network activity, then sufficiently accurate large-scale simulations could, in principle, reproduce elements of those dynamics (Arachchige 2023). However, whether a simulation could actually host a conscious experience remains unresolved.
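
To give a flavour of the single-neuron building blocks such simulations assemble by the billion, here is a leaky integrate-and-fire model: the membrane voltage decays toward rest, integrates input current and emits a spike when it crosses a threshold. The parameters are illustrative textbook values, far simpler than the Blue Brain Project’s detailed compartmental neurons.

```python
# Leaky integrate-and-fire neuron, integrated with Euler steps.
# Illustrative parameters only; real simulations use far richer models.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0):
    """Return spike times (ms) for a given input-current trace."""
    v, spikes = v_rest, []
    for step, i_ext in enumerate(input_current):
        # dv/dt = (-(v - v_rest) + i_ext) / tau   (leak + input)
        v += dt * (-(v - v_rest) + i_ext) / tau
        if v >= v_thresh:           # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset             # reset after the spike
    return spikes

# A constant drive for 100 ms produces regular, repetitive firing;
# no drive produces silence.
spikes = simulate_lif([20.0] * 1000)
print(len(spikes), "spikes in 100 ms")
```

Chaining many such units with synapses and plasticity rules is, in highly simplified terms, what large-scale brain simulation scales up.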


Hybrid Bio-AI Systems (neurons + electronics + learning algorithms)

Hybrid bio-AI systems combine living neurons, embedded microelectrodes and machine-learning-driven feedback loops to create platforms where biological networks can perform adaptive, targeted functions. Although these systems don’t create consciousness, they provide one of the most direct pathways linking engineering practice to the mechanisms thought to underlie consciousness.


Cultured neuronal networks grown on multielectrode arrays can already learn from environmental feedback in real time. In the widely cited DishBrain experiment, Kagan et al. (2022) demonstrated that when human and mouse cortical neurons were interfaced with a virtual environment (the cultures were embedded in a simulated game-world mimicking the arcade game “Pong”), they were able to adapt and modify their firing patterns to show improved task performance over time. In other words, the cells learned and adapted their gameplay in as little as five minutes, which was not observed in the control. The neurons were not simply reacting but adapting, predicting and optimising their behaviour based on continuous sensory input and feedback - exhibiting the kind of ‘sentient’, intelligent behaviour previously associated with conscious systems.
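
The adapt-from-feedback loop at the heart of such experiments can be caricatured with plain error-driven (delta-rule) learning: on each rally the system’s response is compared against task feedback and the input-to-output mapping is nudged to shrink the error, so performance improves over repeated trials. This is a toy numerical sketch of that loop, not a model of the actual cultures or their free-energy dynamics.

```python
# Delta-rule caricature of closed-loop adaptation: the "network" starts
# with no useful sensorimotor mapping and tunes it from feedback.
# TRUE_GAIN and the cue schedule are invented toy numbers.

TRUE_GAIN = 2.0        # toy world: the rewarded response is 2 * cue
LEARNING_RATE = 0.1

def run_trials(n_trials: int):
    gain = 0.0                         # initial mapping: no useful response
    errors = []
    for t in range(n_trials):
        cue = 1.0 + (t % 5) * 0.5      # incoming sensory input
        target = TRUE_GAIN * cue       # response that would earn a "hit"
        response = gain * cue
        error = target - response      # feedback signal after each rally
        gain += LEARNING_RATE * error * cue   # delta-rule update
        errors.append(abs(error))
    return errors

errors = run_trials(60)
print(f"first-trial error {errors[0]:.2f}, last-trial error {errors[-1]:.4f}")
```

The error collapses toward zero over the trials: the system has not become conscious, but it has learned a sensorimotor mapping purely from closed-loop feedback, which is the precondition DishBrain-style platforms let engineers study directly.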


Hybrid systems such as DishBrain don’t claim to generate subjective experience; rather, they engineer the preconditions often associated with conscious systems: adaptive behavioural learning (Kagan et al. 2022), dynamic integration of sensory feedback (Milford et al. 2023), electrical pattern formation and continuous internal updates across networks. These systems allow engineers to construct, modify and study the different processes that, in biological brains, correlate with consciousness. They serve as testbeds where the mechanisms and processes of consciousness can be engineered, without claiming to recreate conscious experience itself.



  5. Is Consciousness An Engineering Problem?

Although consciousness is not yet an engineering problem in the traditional sense, biomedical research is rapidly assembling the foundational components required to approach it systematically. Across fields such as neuroscience, neurotechnology and synthetic biology, evidence shows that we can now probe, manipulate and reconstruct many of the biological processes associated with conscious experience.


As explored above, we already have many of the pieces needed to build towards consciousness: we can measure the complexity associated with consciousness, manipulate consciousness using electrical, magnetic or chemical control, simulate neural processing at increasing scales, grow functional neural tissue outside the body and interface directly with conscious intention using BCIs.


Put all of this together and a striking possibility emerges: if consciousness arises from organised biological processes, then engineering those processes may eventually allow us to recreate aspects of conscious experience. The key word, however, is ‘eventually’. We are still decades away from building anything remotely close to a conscious system - and we may never overcome all of the challenges and barriers involved - but biomedical engineering is already operating close to that boundary.



  6. Why This Matters for the Future of Medicine

Currently, consciousness isn’t something that can be designed or manufactured, yet across neuroscience, computational biology and biomedical engineering, the tools required to make consciousness engineerable are steadily evolving. With every breakthrough, another piece of this highly complicated puzzle falls into place, and with every new technology another piece is discovered.


This is especially significant for medicine because consciousness is at the heart of what makes neurological disease devastating: comas, dementia, strokes, traumatic brain injuries, anaesthesia awareness and many other neuropsychiatric disorders all disrupt the mechanisms that generate conscious experience. Understanding and controlling aspects of consciousness could eventually allow us to restore, stabilise or even rebuild these mechanisms, enabling innovations such as: precision anaesthesia with no risk of awareness (Purdon et al. 2013); reliable prognoses for comas and other disorders of consciousness (Edlow et al. 2021); targeted therapies for psychiatric and cognitive disorders (Sullivan et al. 2021); neural prosthetics capable of replacing damaged cognitive functions (Hampson et al. 2018); brain-repair technologies following brain injury or damage (Li et al. 2025); and BCI rehabilitation to restore lost function (Willett et al. 2023).





Consciousness may never be reduced to circuits, equations or numbers, but exploring this possibility is driving some of the most important innovations of our time. For the first time in history, biomedical engineering is close enough to the boundary that exploring consciousness is no longer a philosophical idea, but is now a promising step towards the next chapter of medical innovation. If engineering consciousness is ultimately beyond our reach, then the knowledge gained in trying will still reshape clinical care, from restoring speech and memory to repairing injured brains. But if it is possible, even partially, then we stand at the beginning of a new era of medicine where we not only treat the brain, but design and rebuild the fundamental processes that underlie conscious experience itself.








Bibliography:

  1. Jang, S. H., & Lee, H. D. (2015). Ascending reticular activating system recovery in a patient with brain injury. Neurology, 84(9), 997–999.

  2. Jones, B. E. (2003). Arousal systems. Frontiers in Bioscience, 8, S438–S451.

  3. Englot, D. J., D’Haese, P. F., Konrad, P. E., Jacobs, M. L., Gore, J. C., Abou-Khalil, B. W., et al. (2017). Functional connectivity disturbances of the ascending reticular activating system in temporal lobe epilepsy. Journal of Neurology, Neurosurgery & Psychiatry, 88(11), 925–932. https://doi.org/10.1136/jnnp-2017-315732

  4. Koch, C., Massimini, M., Boly, M., & Tononi, G. (2016). Neural correlates of consciousness: Progress and problems. Nature Reviews Neuroscience, 17(5), 307–321. https://doi.org/10.1038/nrn.2016.22

  5. Seth, A. K. (2018). Consciousness: The last 50 years (and the next). Brain and Neuroscience Advances, 2, 1–6.

  6. Calabrò, R. S., Cacciola, A., Bramanti, P., & Milardi, D. (2015). Neural correlates of consciousness: What we know and what we have to learn. Neurological Sciences, 36(4), 505–513. https://doi.org/10.1007/s10072-015-2072-x

  7. Dai, R., Jang, H., Hudetz, A. G., Huang, Z., & Mashour, G. A. (2024). Neural correlates of psychedelic, sleep, and sedated states support global theories of consciousness. bioRxiv. https://doi.org/10.1101/2024.10.23.619731

  8. Tononi, G. (2004). An information integration theory of consciousness. BMC Neuroscience, 5, 42. https://doi.org/10.1186/1471-2202-5-42

  9. Fan, J. M., Kudo, K., Verma, P., Ranasinghe, K. G., Morise, H., Findlay, A. M., et al. (2023). Cortical synchrony and information flow during transition from wakefulness to light non–rapid eye movement sleep. Journal of Neuroscience, 43(48), 8157–8171. https://doi.org/10.1523/JNEUROSCI.0197-23.2023

  10. Casali, A. G., et al. (2013). A theoretically based index of consciousness independent of sensory processing and behavior. Science Translational Medicine, 5(198), 198ra105. https://doi.org/10.1126/scitranslmed.3006294

  11. Kim, H., & Lee, U. (2019). Criticality as a determinant of integrated information Φ in human brain networks. Entropy, 21(10), 981. https://doi.org/10.3390/e21100981

  12. Sinitsyn, D. O., et al. (2020). Brain Sciences, 10(12), 917. https://www.mdpi.com/2076-3425/10/12/917

  13. Aubinet, C., Claassen, J., Edlow, B. L., Fischer, D., Gosseries, O., Koch, C., et al. (2025). Covert consciousness: What’s in a name? Brain. https://doi.org/10.1093/brain/awaf349

  14. Pan, J., Xiao, J., Wang, J., Wang, F., Li, J., Qiu, L., et al. (2022). Brain–computer interfaces for awareness detection, auxiliary diagnosis, prognosis, and rehabilitation in patients with disorders of consciousness. Seminars in Neurology, 42(3), 363–374. https://doi.org/10.1055/a-1900-7261

  15. Purdon, P. L., Pierce, E. T., Mukamel, E. A., Prerau, M. J., Walsh, J. L., Wong, K. F., et al. (2013). Electroencephalogram signatures of loss and recovery of consciousness from propofol. Proceedings of the National Academy of Sciences, 110(12), E1142–E1151. https://doi.org/10.1073/pnas.1221180110

  16. Mathur, S., Patel, J., Goldstein, S., Hendrix, J. M., & Jain, A. (n.d.). Bispectral index. StatPearls Publishing. https://www.ncbi.nlm.nih.gov/books/NBK539809/

  17. Moses, D. A., Metzger, S. L., Liu, J. R., Anumanchipalli, G. K., Makin, J. G., Sun, P. F., et al. (2021). Neuroprosthesis for decoding speech in a paralyzed person with anarthria. New England Journal of Medicine, 385(3), 217–227. https://doi.org/10.1056/NEJMoa2027540

  18. Lakshmipriya, T., & Gopinath, S. C. B. (2024). Brain–spine interface for movement restoration after spinal cord injury. Brain & Spine, 4, 102926. https://doi.org/10.1016/j.bas.2024.102926

  19. Hampson, R. E., Song, D., Robinson, B. S., Fetterhoff, D., Dakos, A. S., Roeder, B. M., et al. (2018). Developing a hippocampal neural prosthetic to facilitate human memory encoding and recall. Journal of Neural Engineering, 15(3), 036014. https://doi.org/10.1088/1741-2552/aaaed7

  20. Benabid, A. L., Koudsié, A., Benazzouz, A., Fraix, V., Ashraf, A., Le Bas, J. F., et al. (2000). Subthalamic stimulation for Parkinson’s disease. Archives of Medical Research, 31(3), 282–289. https://doi.org/10.1016/S0188-4409(00)00077-1

  21. Gosseries, O., Thibaut, A., Boly, M., Rosanova, M., Massimini, M., & Laureys, S. (2014). Assessing consciousness in coma and related states using transcranial magnetic stimulation combined with electroencephalography. Annales Françaises d’Anesthésie et de Réanimation, 33(2), 65–71. https://doi.org/10.1016/j.annfar.2013.11.002

  22. George, M. S., & Post, R. M. (2011). Daily left prefrontal repetitive transcranial magnetic stimulation for acute treatment of medication-resistant depression. American Journal of Psychiatry, 168(4), 356–364. https://psychiatryonline.org/doi/abs/10.1176/appi.ajp.2010.10060864

  23. Mancinelli, S., Bariselli, S., & Lodato, S. (2025). The emergence of electrical activity in human brain organoids. Stem Cell Reports, 20(9), 102632. https://doi.org/10.1016/j.stemcr.2025.102632

  24. Sharf, T., van der Molen, T., Glasauer, S. M. K., et al. (2022). Functional neuronal circuitry and oscillatory dynamics in human brain organoids. Nature Communications, 13, 4403. https://doi.org/10.1038/s41467-022-32115-4

  25. Markram, H. (2006). The Blue Brain Project. In Proceedings of the 2006 ACM/IEEE Conference on Supercomputing (pp. 53–es). https://doi.org/10.1145/1188455.1188511

  26. Arachchige, A. S. P. M. (2023). The Blue Brain Project: Pioneering the frontier of brain simulation. AIMS Neuroscience, 10(4), 315–318. https://doi.org/10.3934/Neuroscience.2023024

  27. Kagan, B. J., Kitchen, A. C., Tran, N. T., Habibollahi, F., Khajehnejad, M., Parker, B. J., et al. (2022). In vitro neurons learn and exhibit sentience when embodied in a simulated game-world. Neuron, 110(23), 3952–3969.e8. https://doi.org/10.1016/j.neuron.2022.09.001

  28. Milford, S. R., Shaw, D., & Starke, G. (2023). Playing brains: The ethical challenges posed by silicon sentience and hybrid intelligence in DishBrain. Science and Engineering Ethics, 29(6), 38. https://doi.org/10.1007/s11948-023-00457-x

  29. Edlow, B. L., Claassen, J., Schiff, N. D., & Greer, D. M. (2021). Recovery from disorders of consciousness: Mechanisms, prognosis, and emerging therapies. Nature Reviews Neurology, 17(3), 135–156. https://doi.org/10.1038/s41582-020-00428-x

  30. Sullivan, C. R. P., Olsen, S., & Widge, A. S. (2021). Deep brain stimulation for psychiatric disorders: From focal brain targets to cognitive networks. NeuroImage, 225, 117515. https://doi.org/10.1016/j.neuroimage.2020.117515

  31. Li, C. P., Wang, Y. Y., Zhou, C. W., Ding, C. Y., Teng, P., Nie, R., & Yang, S. G. (2025). Cutting-edge technologies in neural regeneration. Cell Regeneration, 14(1), 38. https://doi.org/10.1186/s13619-025-00260-y

  32. Willett, F. R., Kunz, E. M., Fan, C., Avansino, D. T., Wilson, G. H., Choi, E. Y., et al. (2023). A high-performance speech neuroprosthesis. Nature, 620(7976), 1031–1036. https://doi.org/10.1038/s41586-023-06377-x


 
 
 
