Literature Review on Prosthetic Limbs

This chapter explains the background to this project and identifies the main research questions and methods, to bring clarity and define the project's focus based on lessons learned from earlier efforts and new expectations. There is extensive literature in both the medical and engineering spheres related to the design, manufacture and performance of prosthetic limbs. The literature reviewed includes books, journals, conference extracts, commercial websites and web encyclopedias, videos, patent applications, and existing reports. This chapter covers the basic aspects and concepts that summarise the current knowledge on this topic; it can be taken as a secondary source.

2.1 Structure and mobility of the human arm


Figure 2 Human arm anatomy [Tortora and Derrickson, 2010]

The human body is very complex and highly intricate in the interlinking of its different physiological systems, including the musculoskeletal system, the nervous system and the circulatory system, and it is for this reason that replicating any action inherent to the body by artificial means is very difficult. In the upper limb, as everywhere in the body, the biological systems that make up the bulk of the tissue are dictated by its function, which includes manipulation of the environment. The base structure to which everything is attached, and which acts as the scaffold of the upper limb, is the skeletal system.

The arm consists of three bones: the humerus (upper arm) and the radius and ulna (lower arm). The hand contains the scaphoid, trapezium, trapezoid, lunate, triquetrum, pisiform, hamate and capitate, which constitute the wrist and the base of the palm. The remaining bones are the longer hand bones, the metacarpals, and the phalanges (fingers), which are named according to finger number (1-5) and location (proximal, intermediate or distal). Muscles control the forearm and hand; this extremely comprehensive system of over 30 muscles gives the upper limb and hand their extraordinary dexterity and function. These functions are illustrated in Table 1, and the median, ulnar and radial nerves innervate these muscles.

Table 1 Muscle actions of the forearm [Cooljargon.com, 2016]

2.2 Anatomy of the brain

The brain is a sophisticated structure that manages the entire body. As a component of the central nervous system (CNS), the brain sends, receives, processes, and directs sensory information through the body [Hubel, 1995]. The brain is divided into left and right hemispheres by a band of fibres named the corpus callosum [Woolsey, et al., 2003]. The brain consists of three major sections, with each section having precise functions. The major divisions of the brain are the forebrain (Prosencephalon), midbrain (Mesencephalon), and hindbrain (Rhombencephalon), as shown in Figure 3 [Nowinski, 2011]. This project will focus on the forebrain only for the purposes of the study.

Figure 3 Anatomy of the brain

The brain is divided into left and right hemispheres. The brain is also anatomically divided into the forebrain, midbrain and hindbrain, each containing different structures [Biology, 2017].

Forebrain (Prosencephalon)

The forebrain is the part responsible for multiple functions such as receiving and processing sensory input, thinking, recognising, understanding language, and regulating motor function. The forebrain is the largest division of the brain. It contains the cerebrum, which accounts for about two-thirds of the brain's mass, as shown in Figure 4. The forebrain consists of two subsections named the telencephalon and the diencephalon [Wheelock, 2013].

Figure 4 Structure of human brain

The figure shows the cerebrum, cerebellum and brainstem as parts of the human brain [Humanbrainfacts.org, 2016].

Telencephalon

A main section of the telencephalon is the cerebral cortex, which can be divided into four lobes that are responsible for processing, encoding and interpreting inputs from various sources and for managing cognitive function. Sensory functions handled by the cerebral cortex relate to hearing, touch and vision; cognitive functions relate to thinking, perceiving and language [Nowinski, 2011]. These lobes are found in both the right and left hemispheres of the brain. The lobes are: (1) the parietal lobes, located posterior to the frontal lobes and above the occipital lobes, which are responsible for receiving and encoding sensory information; the somatosensory cortex is located in the parietal lobes and is necessary for processing touch sensations. (2) The frontal lobes, located at the front of the cerebral cortex, which are responsible for movement, decision-making, problem-solving, and planning [Hubel, 1995]. (3) The occipital lobes, located beneath the parietal lobes, which are the primary site of visual processing; visual inputs are directed on to the parietal and temporal lobes for further processing. (4) The temporal lobes, located below the frontal and parietal lobes, which coordinate sensory inputs and also support auditory perception, memory formation, and language and speech production [Woolsey, et al., 2003]. The functions of the human brain's lobes are illustrated in Figure 5.

Figure 5 Lobes of the Cerebrum [Humanbrainfacts.org, 2016]

The cerebrum consists of four major lobes: occipital, temporal, parietal and frontal. Different functional parameters are attributed to each lobe as depicted in the figure.

Diencephalon

The diencephalon is the region of the brain that transmits sensory inputs and joins the parts of the endocrine system with the nervous system [Kandel, et al., 2013]. The diencephalon regulates several functions, including motor functions, and plays a significant role in sensory awareness [Hubel, 1995]. The diencephalon consists of the thalamus, a limbic system structure that joins the parts of the cerebral cortex associated with sensory awareness and movement to other sections of the brain and spinal cord; the hypothalamus, which is responsible for controlling autonomic functions of the body; and the pineal gland, which is in charge of melatonin hormone production [Carter, 2014].

2.3 Sensory system

Touch

“You can’t turn off touch. It never goes away. You can close your eyes and imagine what it’s like to be blind, and you can stop up your ears and imagine what it’s like to be deaf. But touch is so central and ever-present in our lives that we can’t imagine losing it,” says David Linden, a neurobiologist at Johns Hopkins and author of the book “Touch: The Science of Hand, Heart, and Mind” [Stromberg, 2015].

Figure 6 Human figures scaled to match the proportions in which touch sensors are represented in the brain [Stromberg, 2015]

The first sense to develop in humans is the sense of touch, and it is the most difficult to imagine doing without. The somatosensory nervous system is a complex system of nerve cells that responds to changes at the surface or in the internal state of the body. It plays a fundamental role in the creation of the body map in the brain.

Of the roughly five million touch receptors distributed through many parts of the body, including the skin, epithelial tissues, skeletal muscles, organs, the cardiovascular system, bones and joints, around 3,000 are located in each fingertip. These receptors come in four variations: for sensing vibration, for tiny amounts of slippage, for stretching of the skin, and for the finest kinds of textures. Most of the receptors throughout the human body are for pain; there are around 200 pain receptors (nociceptors) for every square centimetre of skin, compared with only 15 receptors for pressure and seven for temperature (thermoreceptors), of which six are for cold and only one for warmth, in the same area [Stromberg, 2015].

Touch, or somatosensory perception, is registered by stimulation of neural receptors in the skin [Kandel, et al., 2013]. The sensation begins with pressure applied to these receptors, named mechanoreceptors. The skin has several receptors that sense different degrees of applied pressure, from mild stroking to strong pressure, as well as the duration of application, from a brief touch to continuous contact [Krantz, 2012]. The sensory inputs from the receptors are conveyed through one of three systems, depending on the type of receptor, as shown in Figure 7 below: (1) the dorsal column-medial lemniscal system, responsible for touch and proprioception, (2) the anterolateral system, responsible for pain and temperature, and (3) the spinocerebellar system, responsible for conveying proprioceptive information to the cerebellum. After this point, the inputs are conveyed to the thalamus, which then transmits the inputs to the primary somatosensory cortex for further processing [Patestas & Gartner, 2016].

Figure 7 Sensory pathway [Clinicalgate, 2017]

The sensory inputs from the receptors are conveyed through one of three systems: (1) the dorsal column-medial lemniscal system, (2) the anterolateral system, and (3) the spinocerebellar system.

Chouchkov divides skin receptors into two types: unencapsulated nerve endings and encapsulated nerve endings. Unencapsulated receptor terminals are of two kinds, epidermal nerve endings and dermal nerve endings; dermal nerve endings are enveloped by Schwann cell lamellae and basal lamellae. There are three types of encapsulated nerve endings: Pacinian corpuscles (lamellar corpuscles), Meissner's corpuscles (tactile corpuscles) and Ruffini nerve endings (bulbous corpuscles). Pacinian corpuscles are sensitive to pressure and vibration. When pressure is exerted on an area of skin that contains Pacinian corpuscles and bends it, pressure is exerted on the central neuron in the corpuscle. This pressure leads to an influx of Na+ across the plasma membrane, and if the pressure exceeds a certain threshold an action potential is generated. Meissner's corpuscles are sensitive to light touch, due to being rapidly activated and deactivated, and to vibration. Ruffini nerve endings are sensitive to skin stretching, sustained pressure and temperature. They respond to sustained pressure by showing little adaptation, so their firing does not reduce even as the pressure duration increases. They can detect angle changes with a specificity of up to 2.75 degrees and also act as thermoreceptors [Terjung and Darin-Smith, 2011].
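To make the threshold behaviour described above concrete, the following minimal Python sketch models a Pacinian-like receptor that fires only when the applied pressure exceeds a threshold; the pressure values and threshold are illustrative, not physiological data.

```python
# Minimal sketch of threshold-based firing in a Pacinian-like mechanoreceptor.
# The pressure samples and the threshold value are illustrative assumptions.

def receptor_response(pressure_samples, threshold=0.5):
    """Return a spike train: 1 when pressure exceeds the firing threshold, else 0."""
    return [1 if p > threshold else 0 for p in pressure_samples]

# Pressure rising past the threshold produces action potentials only above it.
pressure = [0.1, 0.3, 0.6, 0.9, 0.4, 0.7]
print(receptor_response(pressure))   # [0, 0, 1, 1, 0, 1]
```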

Visual

Sight, or vision, is the ability of the eyes to perceive images from visible light. Light enters the eye through the pupil and is focused by the lens onto the retina at the back of the eye [Hubel, 1995]. Two types of photoreceptors, called cones and rods, perceive this light and produce nerve impulses which are directed to the brain through the optic nerve [Lee, et al., 1998]. Once that visual input has been sent, it is conveyed to different brain areas. The optic nerve terminates at the lateral geniculate nucleus, found in the thalamus near the centre of the brain. From there the visual input is directed to the primary visual cortex, which is in the occipital lobe. Once that input is in the primary visual cortex, the brain starts to rebuild the image. Visual input is also directed to the secondary visual cortex for further processing [Tong, 2003].

Figure 8 Visual pathway

Retinal signals travel via the optic nerve (ON) to the lateral geniculate nucleus (LGN), and on to the visual cortex and cortical cells [Helm, 2017].

Auditory

Audition, the human sense of hearing, is attributed to the auditory system, which uses the ear to collect, amplify, and transduce sound waves into electrical impulses that enable the brain to perceive and localise sound [Scott & Johnsrude, 2003]. Inside the ear, the mechanoreceptors consist of organs that perceive the sound's vibrations. First, sound travels through the ear canal and vibrates the eardrum, as shown in Figure 9. Next, the vibrations are transferred to the ossicles, the bones in the middle ear named in sequence the malleus, incus and stapes, which vibrate the fluid in the inner ear. This fluid-filled organ, known as the cochlea, contains tiny hair cells that produce electrical signals when deformed. The signals travel through the auditory nerve directly to the brain, which interprets them as sound [Delano & Elgoyhen, 2016]. People can typically detect sounds within a frequency range of roughly 19 – 20,000 Hertz.

Figure 9 Ear anatomy

Anatomy of the ear showing the hearing pathway [Pinterest, 2017].

2.4 Phantom limb syndrome

Phantom Limb Syndrome (PLS) is a condition in which sensations can still be felt from a limb that has been amputated. It is reported that 60-80% of people will experience phantom limb sensations after amputation [Sherman, et al., 1984]. The first account of these clinical observations was by Ambroise Paré in the mid-16th century, who was involved in the practice of surgical amputation as well as the design of limb prostheses [Thurston, 2007]. The term "phantom" may imply that the painful symptoms are illusory. Sensations can vary from pain, in the form of burning, stabbing or crunching sensations, to pleasure and even movement such as waving, shaking hands or clenching of fists [Subedi & Grossberg, 2011]. Sometimes an amputee will experience a sensation called telescoping, the feeling that the phantom limb is gradually shortening over time. The symptoms of the syndrome occur immediately following amputation in 75% of cases, and otherwise usually within a year of amputation [Neil, 2015]. Wolf (2011) defined phantom limb pain (PLP) as pain resulting from the elimination or loss of sensory input caused by injury to the sensory nerve fibres after amputation or deafferentation. Put simply, PLS is caused by the brain still receiving messages from the nerves that once supplied the missing limb [Flor, 2002]. Until the brain readjusts and re-wires to account for the physiological change, the sensation will be felt in 90% of cases.

The prevailing assumption for the cause of PLP was irritation of the nerve endings, known as "neuromas". After a limb is amputated, many nerve endings terminate at the residual limb. These nerve endings can become irritated and were believed to send abnormal signals to the brain, which were thought to be interpreted as pain [Vaso, et al., 2014]. Treatments based on this assumption were largely unsuccessful: neurosurgeons would perform a further amputation to shorten the stump, in the hope of removing the irritated nerve endings and producing at least temporary relief from the PLP. Instead, the PLP often increased, because of the combined sensation of both the initial PLP and the new phantom stump [Ramachandran & Hirstein, 1998].

When looking more deeply into the cause and underlying mechanisms of PLS, brain reorganisation and synaptic plasticity emerge as the key foundation. It is known from animal studies at the National Institutes of Health (NIH), led by Tim Pons, that there is plasticity in the somatosensory cortex [Pons, et al., 1991]. Two months after monkeys had a middle finger amputated, the area of cortex dedicated to that finger began to respond to tactile stimulation of the adjacent finger [Merzenich et al, 1984]. Not only can amputees experience spontaneous phantom limb sensation, they can also experience feeling in the phantom limb when other body areas are touched. For example, MEG studies showed that, owing to cortical reorganisation, sensory information from the facial nerves can be sent to two different cortical areas: the original face area and the area that previously received information from the arm. These effects can even be modality specific, i.e. vibration on the face leads to perception of vibration on the phantom limb.


Figure 10 PLS is connected directly to the brain [Aalborg University, 2013]

The mechanism underlying the neural reorganisation that leads to these kinds of experiences may be classical Hebbian synaptic plasticity, which involves the activation of NMDA receptors at neuron synapses. Another explanation for feeling in the limb when the face is touched may be that the tactile and proprioceptive input from the face and tissues proximal to the stump takes over the brain area, so spontaneous discharges from these tissues would be misinterpreted as arising from the missing limb (Figure 11) [Ramachandran & Hirstein, 1998].

Figure 11 The effect of limb amputation on the somatosensory homunculus [Bethesda Spine Institute, 2017]

Treatment of phantom limb pain after amputation is quite challenging, but not impossible. There are a number of tested treatments for PLS, some more successful than others. Treatments can be classified as medical, non-medical and surgical, of which medical treatment is the most effective [Bosanqueta, et al., 2015]. One successful treatment is the well-known mirror box illusion treatment. It allows PLS patients to feel relief from the pain by superimposing their functioning limb onto their lost limb using a mirror and a space to hide the missing limb. Medicinal treatments include the use of antidepressants, anticonvulsants, antipsychotics, opioids and others. Electrical nerve stimulation treatments also exist, such as transcutaneous electrical nerve stimulation and transcranial magnetic stimulation [Nikolajsen, 2001]. PLS can disappear over time using these treatments, but in some rare cases it never vanishes completely.

2.5 Learning mechanism


Figure 12 Glutamate receptors: structure and function [Kritis et al., 2015]

Classical conditioning, closely related to reinforcement learning, is a learning mechanism. Reinforcement learning depends on assessing the reward value of stimuli in the environment; this assessment is done by measuring the dopamine release triggered by the stimuli. As more dopamine is released, the feeling of reward in the individual increases, and the stimulus or behaviour that led to this rewarding feeling gains incentive salience, meaning the individual will strive towards replicating the feeling. Incentive salience is achieved through the action of NMDA receptors and calcium, predominantly in the hippocampus. The altering of neuronal networking due to the action of NMDA receptors is known as synaptic plasticity and is the neural basis for learning in humans.

Classical conditioning is best explained through example. Thorndike (1898) demonstrated the effect of conditioning using 'puzzle boxes'. Dogs, cats and chicks were put in a box and, when hungry, had to pull down a loop of wire, depress a lever and step on a platform to obtain food. When the animals had fully associated the sequence of actions with its consequences, they were said to be conditioned. Classical conditioning is therefore said to be a learning process in which an innate response to an important stimulus, in this case the food, becomes linked to a previously neutral stimulus, in this case the sequence of actions, as a consequence of repeated exposure to the neutral stimulus and the important one at the same time [McLeod, 2007]. A person learns the value of stimuli or actions through experience, and achieves this by testing and updating predictions, a process known as delta-rule updating (a minimal sketch of which is given below). The basal ganglia in the midbrain play a role in classical conditioning. Neurons in the basal ganglia project to the brainstem motor areas and the thalamocortical circuits, making the basal ganglia capable of producing body movement based on expected reward. The basal ganglia play a role in guiding eye movements towards locations where reward is available. There is a bias in excitability between the superior colliculi such that a saccade to the to-be-rewarded position occurs more quickly [Hikosaka et al, 2006]. The basal ganglia therefore facilitate the actions that lead to reward, the process at the heart of classical conditioning.
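The delta-rule updating mentioned above can be written as a short sketch. The learning rate and the trial rewards below are arbitrary illustrative values showing how a prediction is nudged towards the received reward on each trial, in the spirit of the Rescorla-Wagner model.

```python
# Minimal sketch of delta-rule (Rescorla-Wagner style) prediction updating.
# alpha (learning rate) and the reward sequence are arbitrary illustrative values.

def update_prediction(prediction, reward, alpha=0.2):
    """Move the prediction towards the received reward by a fraction alpha of the error."""
    error = reward - prediction          # prediction error (the "delta")
    return prediction + alpha * error

prediction = 0.0
for trial, reward in enumerate([1, 1, 1, 0, 1], start=1):
    prediction = update_prediction(prediction, reward)
    print(f"trial {trial}: prediction = {prediction:.3f}")
```

Over repeated rewarded trials the prediction climbs towards the reward value, mirroring how a cue paired with reward gradually acquires predictive value during conditioning.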

From this, it is clear that learning and reward mechanisms are intertwined. This points to the role of dopamine in learning, as it is the neurotransmitter that produces the rewarding feeling crucial to reinforcement learning. Certain behaviours lead to reward in the form of a dopaminergic response, which leads to an experience of pleasure. This leads to conditioned learning, in which the environmental cues associated with that behaviour become associated with the pleasure. Dopamine plays its role in the ventral tegmental area (VTA), within the mesolimbic dopamine pathway in the midbrain: when we encounter rewarding stimuli the VTA fires and produces a dopaminergic response in the nucleus accumbens. How strongly that happens correlates with the feeling of euphoria and the reinforcement potential of that stimulus. The VTA projects to the nucleus accumbens via distinct pathways: the meso-ventral medial pathway (the shell) and the meso-ventral pathway (the core). The core is said to be more involved in learning than the shell, as it responds to stimuli whether positive or negative [Bassareo and Di Chiara, 1999]. Once conditioning has taken place there will be a spike in dopaminergic responding when the cue that has been associated with reward appears. This is due to synaptic strength: the increase (the spike) arises from the development of synapses and changes in synaptic structure that make transmission more efficient.

Synaptic plasticity is the process that allows learning and classical conditioning to occur. Long-term potentiation (LTP) is a form of synaptic plasticity that involves a long-lasting enhancement in the transmission of signals between two neurons, resulting from repeated simultaneous activation of the neurons. LTP is dependent on Hebb's rule being fulfilled: LTP will only occur if a synapse is active at the same time as the postsynaptic cell, in which case the synapse will be strengthened. This process requires repeated simultaneous activation, which results in structural and chemical changes in the synapse that lead to heightened post-synaptic potentials. The NMDA receptor on the membrane of these neurons is key for learning and memory. Experimental evidence for the role of the NMDA receptor in learning and memory comes from the finding that ketamine, an NMDA receptor antagonist, impairs explicit memory [Malhotra et al, 1996]. Furthermore, Morris et al (1986) found that blocking NMDA receptors blocks LTP. Rats were trained in a water maze after being administered an NMDA antagonist (AP5) or a placebo. Those given AP5 displayed disrupted learning and reduced LTP, and so were unable to navigate the maze.

The action of the NMDA receptor is intertwined with the action of calcium: when an NMDA receptor is activated, intracellular cascades potentiate the activity of calcium. The process is as follows: when there is stimulation, glutamate is released and sodium ions flow into the postsynaptic neuron through AMPA channels, while NMDA channels remain blocked by magnesium. Upon repeated stimulation, when the cell becomes depolarised, the magnesium block is removed and ions can flow in through the NMDA receptors. The flow of ions through the NMDA receptors activates them, which has knock-on effects for calcium, e.g. intracellular cascades begin. Calcium then has three effects: (1) it activates CaM kinase, which affects AMPA receptors by phosphorylating those already present, increasing their conductance to sodium ions; (2) it increases release of neurotransmitter from the presynaptic cell via retrograde signals, e.g. nitric oxide; (3) it makes more AMPA receptors available. Because there are more AMPA receptors, the response to a stimulus of given strength will be stronger than it was before the NMDA receptors were activated, so the synapse is enhanced. This physiological change is one of the mechanisms underlying LTP. Long-term depression (LTD) is the opposite of this and occurs when there is a lack of depolarisation; it keeps synapses efficient by weakening unused connections. If stimulation is strong enough and continuous enough, another synapse can form: when AMPA receptors increase in number, the synapse can grow and split into two. LTP, triggered by reinforcement learning, is therefore the mechanism that underlies learning.
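As a rough illustration of Hebb's rule and the LTP-like strengthening described above, the toy sketch below increases a synaptic weight only when presynaptic and postsynaptic activity coincide; the weight, learning rate and activity pattern are illustrative assumptions, not a biophysical model.

```python
# Toy sketch of Hebbian strengthening: a synaptic weight grows only when
# presynaptic and postsynaptic activity coincide (illustrative values only).

def hebbian_update(weight, pre_active, post_active, rate=0.1):
    """Strengthen the synapse when both sides are active at the same time."""
    if pre_active and post_active:
        weight += rate          # coincident activity -> potentiation (LTP-like)
    return weight

weight = 0.5
activity = [(1, 1), (1, 0), (0, 1), (1, 1), (1, 1)]   # (pre, post) pairs per step
for pre, post in activity:
    weight = hebbian_update(weight, pre, post)
print(f"final weight: {weight:.2f}")   # strengthened only by the coincident steps
```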

To summarize, classical conditioning theory involves learning new behavior via the process of association [McLeod, 2008]. The unconditioned stimulus (UCS) is the object or event that originally produces the natural response or unconditioned response (UCR). The neutral stimulus (NS) is a new stimulus that does not produce a response. Once the neutral stimulus has become associated with the unconditioned stimulus, it becomes a conditioned stimulus (CS). The conditioned response (CR) is the response to the conditioned stimulus.

Figure 13 Classical conditioning flow chart

2.6 Brain manipulation

The brain can be manipulated temporarily by illusions, which can have effects in a wide variety of modalities. The brain can also be manipulated in more long-lasting ways, due to environmental influences on brain maturation.


Figure 14 Müller-Lyer illusion

There exist many effective visual illusions that will manipulate the brain for a short period of time. There also exist auditory illusions, which again manipulate the brain temporarily. The Müller-Lyer illusion is a visual illusion that manipulates the brain into believing one line is longer than another, when they are in fact the same length. The illusion works due to smaller lines at the ends of each line that protrude either outwards from the end of the line or inwards. The lines going inwards create the illusion that the line is shorter. The ability of the shorter lines to interfere with the perception of the length of the longer line is one of the features of geometric patterns known as the 'confusion effect' [Sekuler and Erlebacher, 1971].

A different type of illusion, which relies on the integration of visual and auditory stimuli, is known as the McGurk effect. This is where an ambiguous phoneme sound is played simultaneously with a video of a mouth movement. The mouth movement changes halfway through the video but the sound remains the same. For example, an ambiguous phoneme that lies between the ‘fah’ and ‘bah’ sounds will be played. For half of the video the mouth will move in a way that implies the sound was ‘fah’; for the second half it will move in a way that implies the sound was ‘bah’. The illusion arises because the perception of the phoneme changes as the mouth movement does. This illusion shows the power of visual information to manipulate the perception of sound. The brain areas involved in creating the illusion have been investigated using fMRI. From this it was found that there was a positive correlation between the strength of the McGurk effect and activation in the left occipito-temporal junction, an area associated with processing visual motion. This suggests that auditory information modulates visual processing to affect perception [Jones and Callan, 2003]. Another study used fMRI data to locate auditory-visual speech processing in the superior temporal sulcus (STS), an area believed to be involved in creating the McGurk effect. The authors then used transcranial magnetic stimulation (TMS) to disrupt this location within the STS and found that this led to fewer reports from participants of experiencing the McGurk effect. They concluded that the STS is involved in the McGurk effect and in the auditory-visual integration of speech [Beauchamp, Nath and Pasalar, 2010].

Another way the brain can be manipulated in the short term is shown through the phonemic restoration effect. This is where a missing phoneme from a word is reported to be heard by a listener. Testing the effect involves playing a recording of a spoken word in which one of the letters or phonemes is masked by a sound. In many cases the listeners will claim to have heard the missing sound. It is thought that this is the mechanism that allows perception of words in loud environments: the context is sometimes able to provide enough information and manipulate the brain into hearing a sound that does not exist [Warren, 1970].

A more long-term manipulation of the brain by sound stimuli is the loss of phoneme detection that occurs in the infant years. Very young English infants have been shown to detect differences between Japanese phoneme sounds. The same infants, tested when they neared one year old, could no longer differentiate the Japanese phonemes but could distinguish differences in their native language, English. The mechanism behind this process is most likely long-term depression (LTD) in synaptic plasticity. The neurones required to differentiate the foreign phonemes are not used when the environment in which the infant is maturing does not contain those sounds. This leads to LTD and the loss of function in those neurones. A similar process is involved in the perception of faces. Very young infants find it as easy to distinguish monkey faces as human faces, an ability that is lost with age. This is again most likely due to LTD and the loss of the brain areas needed to distinguish monkey faces, as that is not a skill used or practised by the infants. These processes are ways in which the brain can be manipulated by the environment, the effects of which will last many years or indefinitely.

Perceptual learning is the way in which the brain learns to distinguish stimuli with which it is more familiar through practice. This is the mechanism that underlies the brain being manipulated by the environment. Perceptual learning usually relies on the stimulus of concern being attended to. This concept can be applied to the perception of colour, where linguistic relativity plays a role. As children learn colour terms they begin to learn the boundaries assigned to different parts of the colour spectrum, something that depends on the word use specific to each language. This can affect the perceptual categorisation of different colours and consequently may subtly affect their perception of colour. This is a long-term manipulation of the brain in the visual modality.

2.7 Body ownership illusion

The body transfer illusion, also referred to in the literature as "body proprietorship" or "body ownership", is the illusion of owning a body part or a whole body other than one's own [Kammers, et al., 2006]. It can be induced experimentally by controlling the visual input of the individual while also providing visual and sensory cues that relate to the individual's body [Petkova & Ehrsson, 2008].

In general, two processes are involved in perception and sensation. Bottom-up processing means that sensory data are processed as they come in, whereas in top-down processing perception is driven by prior knowledge and expectation [Bruce, et al., 2003]. For the illusion to happen, bottom-up processing, for example the contribution of visual data, must override the top-down awareness that the body part in question is not in the right place.

The rubber hand illusion (RHI) is the ability of an individual to experience a tactile sensation when an inanimate rubber hand is touched. It arises when one of their hands is hidden from view, for example under a cloth, and a rubber hand is placed in front of them in a way that mimics the position of their real hand. The real hand and rubber hand are then stroked simultaneously for as long as it takes for the individual to begin to feel the strokes on the rubber hand in the same way as strokes on their real hand; at this point, the illusion has been achieved. The mirror hand illusion is similar in concept to the RHI in that both involve a transition in the sense of ownership of a body part. It is most commonly used as a therapy for phantom limb pain or as a way of restoring action in limbs after stroke. In the mirror illusion, a mirror is placed in front of the individual at a perpendicular angle to them. Their injured limb is placed out of sight while their functioning limb is placed in front of the mirror. If the angle is correct, the patient will be able to look into the mirror and see what appears to be their injured limb looking fully functional. After a period of time, they may experience relief from pain induced by the reflection of their arm being intact. In some cases touching the intact limb can give the visual and perceptual illusion of the injured limb being touched, a concept similar to the RHI.

 

Experiments involving the RHI can vary in terms of the way the participant is conditioned. One study varied whether the visual and tactile stimuli were presented in synchrony or asynchrony. The study also varied the position of the rubber hand in relation to the real hand; for example, the rubber hand was said to be congruent with the real hand when they were held in the same orientation, e.g. palm facing downwards [Tsakiris and Haggard, 2005]. Experiments involving the RHI can also differ in the methods used to examine participants. Rohde, DiLuca and Ernst (2011) similarly varied whether the stimuli were presented in synchrony or asynchrony, but also measured participants using a self-report questionnaire to gain insight into subjective ratings. A more in-depth, neurobiological approach was taken by Ehrsson, Spence and Passingham (2004). While also varying synchrony and orientation, they measured brain activity during the illusion using functional magnetic resonance imaging (fMRI). This allowed them to observe the specific brain areas involved in creating the experience of the illusion.

Tsakiris and Haggard concluded from their experiments that the RHI is the result of purely bottom-up mechanisms, as the illusion was only found to occur when the rubber and real hands were in congruent positions and the visual and tactile stimuli were experienced in synchrony. They explained their findings using evidence from Graziano et al (2000), who found that there were neurones in area five of the parietal lobe that responded to the RHI when the real and rubber hands had congruent positions. The findings from Rohde, DiLuca and Ernst's experiment were that the illusion correlated with proprioceptive drift (a change of perceived finger location towards the rubber hand), and that the drift occurred in both the synchronous and asynchronous conditions. Interestingly, however, the feeling of ownership of the rubber hand only occurred in the synchronous conditions, as highlighted by the questionnaires. From this, it is clear that the feeling of ownership and proprioceptive drift are dissociated, so one cannot be used to infer the other; furthermore, different mechanisms of multisensory integration are responsible for proprioceptive drift and the sense of ownership. Their study also highlights the importance of questionnaires when gaining insight into the mechanisms at play in this illusion. The fMRI data provided by Ehrsson, Spence and Passingham showed three main neural mechanisms involved in creating the RHI. Firstly, multisensory integration (integrating the various forms of sensory information that the participant is being provided with) occurs in the parietocerebellar regions; specifically, the ventral premotor cortex plays a key role as it is anatomically connected to the visual, somatosensory and frontal motor areas. Secondly, the recalibration needed for proprioceptive drift happens in the reaching circuits. Lastly, the self-attribution of the rubber hand, at the heart of the illusion, occurs in the bilateral premotor cortex, as those feeling the illusion most strongly also showed the strongest BOLD response in that region.


Figure 15 The paradigm of the rubber hand illusion (RHI) [Mind Tricks Gallery, 2018]

Many studies have aimed to describe the applications of the mirror hand illusion and provide explanations for the mechanisms involved. In a study outlining three case studies of patients with limb pain treated by the mirror illusion, an explanation for its effectiveness is proposed. In these cases, the illusion induced the perception of being touched in the injured limb. Rosén and Lundborg (2005) claim that the illusion of being touched, which is mediated through the visual system, may be based on neurones in the somatosensory cortex that are activated by tactile stimulation of the hand as well as by visual observation of this tactile stimulation. This proposal is based on the idea that there are mirror neurones in the premotor cortex, which are activated by hand actions and by observation of these hand actions. Another explanation is proposed by Giummarra et al (2010). They suggest a model in which observation of the hand movement in the mirror triggers body representations through activation of the posterior parietal cortex and temporoparietal junction. They claim activity in these regions heightens awareness of peripersonal space and increases tactile sensitivity, which enhances the perception of illusory touch and embodiment. Another explanation for the illusion comes from a study using fMRI. The illusion was carried out as normal using a mirror and then repeated in the same way except that the mirror was removed. From the fMRI data, two areas were found to light up only in the mirror-present condition: the right superior temporal gyrus and the right superior occipital gyrus [Matthys et al, 2009].

2.8 Blind vs blindfolded

Studies have confirmed that the visual cortex in blind people shows deficits, most markedly in congenitally blind individuals; in late blindness (after puberty) and in blindfolded sighted individuals the situation is different [Petkova, et al., 2012]. Overall, when the brain receives no visual information, this leads not only to changes in the visual system but also to physical and functional reorganisation of the brain areas that mediate the integration of information across the remaining sensory areas [Kupers, et al., 2011].

Older studies of brain anatomy assumed that each area of the human brain was responsible only for specific tasks, so that each area processed a precise type of information without a shift in its function. In the last decade, however, scientists have found that the brain can form new and different neural connections, behaving in a plastic manner. For instance, the occipital cortex is assumed to be used primarily for visual processing; in blind people, however, this area receives no visual input. An experiment examining congenitally blind and late-blind subjects was conducted by Burton et al. (2001) to test whether the brain can rewire and use the inactive neurons of the visual system for other sensory modalities [Burton, et al., 2001]. Using functional magnetic resonance imaging (fMRI), the scientists measured cerebral blood flow while the subjects read Braille words. The fMRI showed that the major activity while the subjects read words occurred in the visual cortex; notably, the congenitally blind subjects showed more activity in the visual cortex than the late-blind subjects [Coxa & Savoya, 2002].

2.9 Telemetry biopotential data

In the modern era of medicine, many imaging and recording procedures for the human body are employed. The electro-biological measurements include electrocardiography (ECG) for heart activity, electromyography (EMG) for muscle contractions, electroencephalography (EEG) and magnetoencephalography (MEG) for brain activity, electrogastrography (EGG) for the stomach, and electrooculography (EOG) for eye movement [Taplan, 2002]. The different electro-biological measurements are shown in Figure 16 below:

Figure 16 Different electro-biological measurement signals: (a) electroencephalography (EEG), (b) electrocardiography (ECG) and (c) electromyography (EMG) [Correa & Leber, 2011]

Electroencephalogram

Figure 17 Brain recording domains [Leuthardt et al., 2009]

The electroencephalogram (EEG) is a test to measure and record the electrical activity of the brain using electrodes attached to the scalp; it was first discovered by the German scientist Hans Berger in 1924.

Methods of measuring brain signals can be categorised into two groups: invasive and non-invasive. An invasive approach such as electrocorticography (ECoG) requires the physical implantation of electrodes in the human or animal brain, and has the advantage of measuring single neurons or very local field potentials. Non-invasive approaches, for instance EEG or PET scans, provide usable measurements but are unable to image the inside of the brain and observe what happens there [Kropotov, 2009].

A pair of conductive electrodes, made of silver for example, is used to record brain activity from the scalp. The difference in voltage between the electrodes is measured, and since the signal is weak (30-100 μV) it has to be amplified. Current flows when neurons communicate. The simplest event is called an action potential: a discharge caused by the fast opening and closing of Na+ and K+ ion channels in the neuron membrane. If the membrane depolarises to a certain threshold, the neuron will "fire". The resulting trace of these discharges over time represents the brain activity. The EEG is one of the few techniques available with such high temporal resolution; it can detect changes in the electrical activity of the brain at the millisecond level.

The most common application of EEG is to search for brain damage and various disorders such as epilepsy, or to confirm brain death. Studying EEG signals and how they are linked to different mental states has led to a number of methods for manipulating these waves; for example, music at specific frequencies is claimed to help people become more relaxed, focused and smarter [Brain Entertainment, n.d.]. Investigating the relation between medication and EEG is another area that has interested many researchers. According to Braverman (1990), antidepressants, morphine, heroin and marijuana are addictive because they increase alpha activity. Alcohol is a quick remedy for becoming relaxed, as it increases the amplitude of slow-wave frequencies, decreases the fast waves and produces an excess of beta waves. Further, Braverman discusses in his article how brain waves represent the various parts of our consciousness, and how, given the knowledge and treatment to change them, we can move closer to balanced brain waves, or happiness, in two ways: event-related potentials (ERP) or neurofeedback training.
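As a rough sketch of how such frequency-band activity could be quantified from a recorded EEG trace, the following Python fragment band-pass filters a signal and estimates alpha-band (8-12 Hz) power using SciPy; the sampling rate and the synthetic test signal are assumptions for illustration, not recordings from this project.

```python
# Minimal sketch: estimating alpha-band (8-12 Hz) power from an EEG trace.
# The sampling rate and the synthetic test signal are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0                                   # assumed sampling rate in Hz
t = np.arange(0, 5, 1 / fs)                  # 5 seconds of data
# Synthetic "EEG": a 10 Hz alpha component buried in noise, amplitudes in volts.
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 10e-6 * np.random.randn(t.size)

# Band-pass filter to isolate the alpha band (cut-offs normalised to Nyquist).
b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
alpha = filtfilt(b, a, eeg)

alpha_power = np.mean(alpha ** 2)            # mean squared amplitude as band power
print(f"alpha-band power: {alpha_power:.2e} V^2")
```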

Electromyography

EMG is the electrical recording of muscle activity; it is also a diagnostic procedure for assessing the health of muscles and the nerve cells that control them. It involves electrodes detecting changes in the muscle. The signal is detected by means of electrodes that are either transcutaneous (on the surface) or implanted. In the model being designed, electrodes and sensors make electrical contact with the skin, with two of them at a target muscle and the third closer to the relevant bone [Knipe, n.d.]. As surface EMG is non-invasive, it is the most common, and most important, technique for controlling a prosthetic. The electrodes are made of stainless steel and sense the activity of the muscle. The muscle activity is low power, so isolation and differential amplifiers are required to increase the power and isolate the EMG from any other electromagnetic "phenomena" that may be picked up by the electrodes. The EMG is decoded to produce a voltage corresponding to the underlying muscle activity. The process therefore involves the muscle fibre responses stimulating the electrode sensors; the resulting signals are decoded and then processed. One disadvantage is that the strength of the signals depends on the voltage output and on the muscle contractions.
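The decoding step described above, from raw EMG to a control voltage, is commonly implemented as rectification followed by low-pass filtering to obtain an envelope, then thresholding. The sketch below follows that generic approach; the sampling rate, cut-off frequency, threshold and synthetic contraction burst are assumptions for illustration, not the specific scheme used in this project.

```python
# Sketch of a generic surface-EMG processing chain: rectify, low-pass filter to an
# envelope, then threshold into an on/off control signal for a prosthetic actuator.
# The sampling rate, cut-off, threshold and synthetic burst are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                                   # assumed sampling rate in Hz
t = np.arange(0, 2, 1 / fs)
burst = (t > 0.5) & (t < 1.5)                 # muscle contraction between 0.5 s and 1.5 s
emg = np.random.randn(t.size) * (0.05 + 0.5 * burst)   # noisy signal, larger during burst

rectified = np.abs(emg)                       # full-wave rectification
b, a = butter(2, 5 / (fs / 2), btype="low")   # 5 Hz low-pass to extract the envelope
envelope = filtfilt(b, a, rectified)

command = envelope > 0.2                      # simple threshold: actuate vs rest
print(f"fraction of time actuated: {command.mean():.2f}")
```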

Following the amputation or removal of a limb, the neuromuscular system supplying motor function to the limb remains intact. This residual nerve and muscle supply can be utilised to provide input to the prosthesis [Camdir, 2015]. However, different limbs have different patterns of nerve supply and muscle control, so the origin and pattern of control of the prosthetic will depend on implementation and timing. Only relatively recently has it become possible to harness EMG signals from functioning residual muscles to drive prosthetic limb function [Sudarsan and Sekaran, 2012].
