 

NRSC 2100 Blog

A GROUP WEBLOG FOR NRSC 2100 SUMMER NRSC 2100


December 6, 2011

Mirror Neurons Allow the Blind to "See!"


By watching the actions of others and comprehending the intent conveyed through those acts, individuals constantly learn from their social interactions and adapt accordingly. In humans, a specific class of neurons, known as mirror neurons, comprises a system believed to be a factor in the understanding of actions and intentions, as well as in language acquisition, learning by imitation, and the development of empathy. It has been well established that the mirror system shows greater activation during observation of familiar movements, and can even be activated by listening to familiar sounds alone, without any accompanying visual cues. While this clearly indicates that visual perception is not a requisite for activation of mirror neurons, it does not preclude the possibility that activation is ultimately due to visually based mental imagery triggered by the auditory stimuli. So is functional sight vital to the activation of mirror neurons, and thus to the ability to learn through imitation and interact successfully with the external world? Or can the visually impaired still retain use of their mirror neuron networks, and thereby learn from and effectively "see" the actions of others? Emiliano Ricciardi of the University of Pisa, together with his colleagues, designed a novel experiment to find out.

The study consisted of eight blind participants and fourteen participants with normal vision. Of the eight blind participants, seven were congenitally blind and one lost vision at the age of two, yet had no memory of any visual experience. Each subject was asked to listen to twenty aural descriptions of hand-motor actions and ten environmental sound samples, such as a rainstorm. The participants were also asked to perform motor pantomimes of specific actions upon hearing a spoken explanation of the movements and the items involved. Upon completion of the task, participants were asked to identify the sounds they had heard and rate their competency in performing the associated actions. The sighted participants were additionally asked to view action or environmental images, as well as pantomime action words that appeared on a screen. During all phases, every participant was imaged using fMRI in order to observe changing brain activity throughout the tasks. Both groups of participants demonstrated equal capabilities in correct execution of the given motor movements and recognition of the various auditory stimuli.

The imaging results from Ricciardi's study revealed an overlap of brain activation between specific areas when participants listened to action sounds and when the actions were physically performed. This overlap isolated a left-lateralized mirror neuron cortical network composed of premotor, temporal, and parietal areas in both blind and sighted subjects. Auditory presentation of familiar actions and movements caused greater overall activation of the mirror system network in both subject groups, compared to the activation elicited by unfamiliar sounds and actions.

These results are significant because they provide evidence that the mirror system can develop normally without vision, through the processing of non-visual information about actions. Individuals who were born sightless, and thus were never exposed to any visual stimuli, still possessed functioning motor-based mirror neuron networks that were triggered by auditory cues describing actions. Thus, blind individuals can in essence "see" the actions of others, in the sense that the same cortical areas are activated in them as in sighted people observing and interpreting actions. This implies that mirror neuron networks are based upon abstract, supramodal representations of actions, allowing sightless individuals to understand external actions and learn through imitation of others as successfully as those without impaired vision. Ricciardi has achieved the seemingly impossible and presented the blind with the gift of sight... at least from a cortical perspective.

Original article can be found at: http://www.jneurosci.org/content/29/31/9719.full?sid=948c8b50-3d40-470f-b0dd-ca5c939b2ec0
Posted by      Anjali C. at 11:50 PM MST

Want to Ace the Final? Eat Protein, Not Sugar


We've all heard the saying "you are what you eat," but new research suggests "you think what you eat." A subset of neurons in the hypothalamus called orexin/hypocretin (orx/hcrt) cells are known to regulate energy, reward, and wakefulness, and they are stimulated by protein and inhibited by sugar. So eat a protein-rich meal before the final to ensure you stay alert and energized!

There are 20 different amino acids coded for by the human genome. They form the basis of all our proteins. In the November 17th issue of Neuron, a study led by Dr. Denis Burdakov from the Department of Pharmacology at Cambridge shows that amino acids stimulate orx/hcrt cells while glucose depresses them. Interestingly, fatty acids did not affect the firing of orx/hcrt cells. People with defects in these cells have narcolepsy and weight gain, highlighting their importance in wakefulness and healthy metabolism.

The study used in vitro and in vivo techniques to test amino acids' effects on orx/hcrt neurons in mice. The mice were transgenic for orx/hcrt-eGFP, meaning eGFP was expressed only in orx/hcrt cells so they could be easily identified. The in vitro work was done using whole-cell patch-clamp experiments on brain slices, where amino acids clearly depolarized the cells, even in the presence of blockers of ionotropic glutamate, GABA, and glycine receptors. Live mice were force-fed the same amino acid mixtures tested in the whole-cell recordings, and the researchers then used an immunocytochemical assay to determine c-Fos expression, a marker of active cells. This showed that ingested amino acids do reach the lateral hypothalamus to stimulate orx/hcrt cells.

Next, the researchers wanted to investigate the effects of combining glucose and protein in one experiment. They had previously shown that glucose depresses the same orx/hcrt cells that amino acids excite. In fact, glucose on its own produced stronger inhibition than the amino acids produced excitation. The expected result of mixing glucose and amino acids was therefore either that the two would cancel each other out or that there would be a slight net inhibition due to glucose. Unexpectedly, the mixture depolarized (excited) the cells. Placing glucose alone on the same cell produced the characteristic hyperpolarization, so these results were puzzling. They used pyruvate, a breakdown product of amino acids, to examine whether amino acids were changing the cell's response to glucose. Pyruvate did reduce the glucose response in a dose-dependent manner, indicating that amino acids suppress the glucose response in orx/hcrt cells, resulting in depolarization/excitation instead of inhibition.

It's 3 pm and you just ate a bag of gummy bears and drank a soda. An hour later you begin to get sleepy. Why? The glucose is inhibiting orx/hcrt cells, causing a dip in energy and wakefulness. The best replacement for your sugary afternoon snack is something with protein, since amino acids can compete with glucose's inhibitory effect and drive a wakeful, energizing stimulation of these cells. Diet is inexorably tied to brain function, a fact we should take heed of when studying for a big exam or trying to stay awake during lecture.


Karnani, M. M., Apergis-Schoute, J., Adamantidis, A., Jensen, L. T., de Lecea, L., Fugger, L., & Burdakov, D. (2011). Activation of central orexin/hypocretin neurons by dietary amino acids. Neuron, 72(4), 616-629.
Edited by      Amanda W. at 9:06 AM MST

December 5, 2011

Baseball IQ


The use of performance enhancing drugs by MLB athletes has, for a long time, been the most distinct way in which players attempt to gain an advantage. By combining these drugs with physical training methods players have reached new levels of strength and endurance.

However, pure physical strength is not the only attribute that players must possess to succeed at the sport. Hand-eye coordination is key; you must have the exceptional ability to physically react to a visual stimulus in a fraction of a second.

What this means is that the best players owe their success to the ability of their brains as much as they do to their bodies. The brain is responsible for identifying a stimulus (baseball), deciding whether it will be in or out of the strike zone, and then sending a message to the muscles to swing/not swing. Considering that pitchers are reaching speeds of 95 mph consistently, it is extremely remarkable that this process can occur with success in such a short time frame.
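
To put that time frame in numbers, here is a quick back-of-the-envelope calculation; the 60.5-foot rubber-to-plate distance is the standard figure and an assumption on my part, not something given in the post.

```python
# Rough estimate of how long a 95 mph pitch takes to reach the plate.
PLATE_DISTANCE_FT = 60.5   # standard pitching distance (assumed, not from the article)
PITCH_SPEED_MPH = 95
FT_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

speed_ft_per_s = PITCH_SPEED_MPH * FT_PER_MILE / SECONDS_PER_HOUR  # ~139 ft/s
flight_time_s = PLATE_DISTANCE_FT / speed_ft_per_s                 # ~0.43 s

print(f"Ball flight time: {flight_time_s:.2f} seconds")
```

All of the perception, decision, and motor commands described above have to fit inside that roughly 0.43-second window.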

Researchers have recently begun to study how the brain takes in and decodes this type of visual information. In a paper titled "An Oculomotor Decision Process Revealed by Functional Magnetic Resonance Imaging," researchers used fMRI to determine the brain structures involved in such a task.

The researchers had subjects do a go/no-go task while their brains were scanned. They had the subjects judge whether a moving target would cross a particular "strike zone". The target was viewed head on, similar to what a pitch would look like from the batter's box. In the subjects' field of vision was a circular target, which was the eventual strike zone. The task was simple. They had two buttons: one representing a target "hit" and one representing a target "miss". The subjects were asked to judge the outcome as quickly as possible, and brain imaging was recorded as the subjects watched the entire flight of the ball.
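
As a rough illustration of the judgment the subjects had to make, here is a minimal sketch of the hit/miss logic; the geometry, units, and numbers are invented for illustration and are not the stimulus parameters used in the paper.

```python
import random

# Toy version of the go/no-go judgment: extrapolate a moving target's path and
# decide whether it ends up inside a circular "strike zone".
STRIKE_ZONE_RADIUS = 1.0  # arbitrary units

def judge(x0, y0, vx, vy, t_flight):
    """Classify the trial as 'hit' or 'miss' based on where the target ends up."""
    x = x0 + vx * t_flight
    y = y0 + vy * t_flight
    return "hit" if (x**2 + y**2) ** 0.5 <= STRIKE_ZONE_RADIUS else "miss"

# One simulated trial with a random starting point and velocity.
print(judge(x0=random.uniform(-2, 2), y0=random.uniform(-2, 2),
            vx=random.uniform(-1, 1), vy=random.uniform(-1, 1), t_flight=1.0))
```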

They found the most activity in the supplementary eye field, frontal eye fields, and bilateral superior lateral lobule. The right ventrolateral prefrontal cortex was especially active during go trials ending in a target hit as opposed to a target miss. They believe this suggests that these areas are involved in rule-based decision making. When, based on the trajectory of the object, the brain decides it will hit the target, a cascade of neurons fires, resulting in the subject choosing "hit". The same goes for a miss.

The interesting application of this in sports is that these areas could potentially be improved. Quicker neuron firing rates could result in quicker determination of go vs. no go, or, in the case of baseball, swing vs. no swing. It is entirely possible that the controversial performance enhancer of the future won't have anything to do with brawn but with brains.

If this type of enhancement does play out it will be interesting to see the reaction of the fans and the media. Will this be considered to be on the same level of cheating as steroids? It is hard to say, but it seems to be an inevitable problem that baseball will have to face.
Posted by      Sean F. at 11:29 PM MST

Neuro-Anthropology: A step backward or forwards?


Neuroscience is an incredibly broad subject. The world that we live in is touched by the brain in so many ways; it is the brain that allows us to experience and investigate the world itself. The new field of Neuro-Anthropology hopes to find the link between how people interact with their environment, and each other, and the biological processes within the brain. How does the brain, in conjunction with the environment, affect what decisions we make and what we do? This seems like a mighty task, bringing together the ethical issues inherent to both fields and once again grappling with the age-old question: Nature or nurture? Interestingly, in his paper, "Humans, Brains, and Their Environment: Marriage between Neuroscience and Anthropology?" Georg Northoff reiterates the argument that the separation between nature and nurture is not applicable to the brain; that it transcends both to form a synthesis because it connects to the world in such an intimate way.
Northoff also summarizes a somewhat worrisome study by Gutchess et al., 2006 in which the neural activity of people in western and eastern cultures was compared. They found differences in the way that people processed information based on whether they lived in an individualist or collective culture. This is very interesting and highly relevant to the idea that the nature of our minds is influenced and synthesized to the nurture of our surroundings. But, such research also makes me worry about the focus on the differences between us. Throughout the history of cultural anthropology racist arguments have been made based on the idea that people from opposing cultures are inherently different and therefore "savage" or in need of "civilizing". I don't think that similar arguments could hold up in today's society but does knowing that someone in another society is fundamentally different all the way to the level of brain functioning change how we relate to one another? Does it make differences seem more acceptable since they are part of the internal biological function of the brain? Or, does it make our differences seem even greater and unapproachable?
Also, this new coalition between anthropology and neuroscience makes me think about the limits of the field. Does neuroscience really have the ability and power to describe things as complex as entire cultures? Can it explain why people act how they do, why they create their world in specific ways under different conditions? Does it have the ability to look out at such a macroscopic level? I feel like it might be stretching its bounds a little too far. Can the firing of action potentials really tell us this much?
Of course, these are the same tasks that anthropology, as a field, has been trying to tackle for a very long time. So perhaps, it isn't such a pipedream after all. It is, at the very least, a very interesting marriage between two very broad fields.

Source: http://www.cell.com/neuron/fulltext/S0896-6273(10)00141-8
Posted by      Megan M. at 10:19 PM MST
Tags: anthropology

Viruses on the mind: More prevalent than we think?


The past several decades have seen huge leaps in the understanding of neurological disease. Alzheimer's, schizophrenia, Parkinson's, multiple sclerosis, and others have been narrowed down to dysfunction of specific brain regions. What we still barely know, however, is how these regions are damaged in the first place. Parkinson's is caused by damage to, or a decrease in, substantia nigra dopamine neurons, but what causes this damage? Sadly, there is no definitive answer as of yet. Genetics tends to be a factor, but hardly explains every case. Toxins or immunity may play a role, but they too cannot fill in all of the blanks.

Viruses have long been known to cause neurological illness, but only a small fraction of all viral infections involve the brain. Viral encephalitis is a clear case of a viral brain infection, marked by inflammation and occasionally fatal intracranial pressure. While encephalitis is an obvious brain infection, recent research indicates that viruses are much more neurologically active than we give them credit for.

Many viral infections of the brain have long been nearly impossible to understand. Diseases have long been defined by specific causes (the influenza virus causes the flu, and so forth), but many viruses do not exhibit this behavior in the brain. The polio virus can infect the majority of a population, but only a portion of those infected are afflicted with motor impairment. While the reason for this is little understood, it is clear that viruses can penetrate the blood-brain barrier in infrequent events. These viruses can gain access by stowing away inside immune cells or through lapses in barrier integrity.

More and more evidence is coming to light that potential viral brain diseases are caused not by a specific virus, but by the location where the virus ends up. Viruses that find themselves in a specific structure can cause focal damage to that area. Infection of oligodendrocytes in mice has been shown to produce MS-like symptoms; however, it is not clear whether the damage is caused by viral lytic effects or by the immune response to the infected cells. The VSV virus has been shown to indirectly destroy serotonin neurons in rodents. After infection, microglia detect the infected neurons and destroy them with the virus inside, leaving no evidence that the virus was ever there. The only real indication of this virus is the later deficiency of serotonin neurons, and there is no indication of why they are missing.

As can be seen in the case of VSV, viruses can be silent neuron assassins, leaving no trace as to the origins of neuronal demise. While it is difficult to detect any presence of viral infection in the brain, this very stealth suggests that viruses may lead to many neurological diseases. The selective and silent nature of these neuron killers makes them a perfect candidate for diseases marked by damage to specific cell populations.

van den Pol, A. N. (2009). Viral infection leading to brain dysfunction: More prevalent than appreciated? Neuron, 64(1), 17-20. doi:10.1016/j.neuron.2009.09.023
Posted by      jacob f. at 9:00 PM MST

Vegas Checklist: Casinos, Clubs, and... Sleep?


In the City of Sin, sleep might be your last priority, yet could sacrificing slumber cost you even more than the unfavorable odds of gambling already predict? According to researchers at Duke University, sleep deprivation results in a strategy alteration toward gain seeking behavior as opposed to protecting against loss during risky decision making. Numerous studies have established that insufficient sleep results in impaired attention, working memory, and learning. In this study, Vinod Venkatraman and his colleagues demonstrated that inadequate sleep also produces a bias in decision-making that is distinct from the general effect of diminished cognition due to the poor vigilance associated with sustained wakefulness.

Twenty-nine adult males, with an average age of 22.34 years, participated in this study. The subjects were presented with a number of complex mixed five-outcome gambles, each consisting of two positive monetary outcomes, a neutral reference outcome, and two negative financial loss outcomes. The trials fell into two categories: gain-focus trials and loss-focus trials. In the gain-focus trials, participants could either choose to maximize their gain by increasing the value of the highest offered outcome (Gmax) or choose to improve their overall probability of winning money compared to losing money (Pmax). In the loss-focus trials, participants could either choose the Pmax option or choose to minimize their loss by decreasing the amount of their most negative outcome (Lmin). Following the presentation of all the trials, a few of the gambles were randomly chosen and played out for actual monetary losses or gains, which were then shown to the subject. Each participant completed the task after a full night of sleep and again after being forced to remain awake for 24 hours, both times while being imaged with fMRI. The participants were also required to complete the Psychomotor Vigilance Task every hour throughout the night when they were forced to remain awake, in order to gather a measure of sustained attention.
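
To make the trial structure concrete, here is a minimal sketch of one such gamble and the three choice types; the dollar amounts and probabilities are invented for illustration and are not taken from the study.

```python
# One hypothetical five-outcome gamble: two gains, a neutral reference, two losses.
gamble = {
    "outcomes_usd":  [8.00, 4.00, 0.00, -3.00, -6.00],
    "probabilities": [0.20, 0.20, 0.20, 0.20, 0.20],
}

# The three choice types described above.
choices = {
    "Gmax": "raise the largest gain (offered on gain-focus trials)",
    "Pmax": "shift probability toward the winning outcomes (offered on both trial types)",
    "Lmin": "shrink the worst loss (offered on loss-focus trials)",
}

expected_value = sum(o * p for o, p in zip(gamble["outcomes_usd"], gamble["probabilities"]))
print(f"Expected value of the base gamble: ${expected_value:.2f}")
for label, meaning in choices.items():
    print(f"{label}: {meaning}")
```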

The results of the study revealed that in the sleep-deprived state, participants demonstrated a greater inclination toward seeking gain, evidenced by the larger proportion of Gmax versus Pmax choices in the gain-focus trials and a decreased proportion of Lmin versus Pmax decisions in the loss-focus trials. Sleep deprivation did bring about decreased psychomotor vigilance in the subjects, yet the extent of the bias shown in risky decision-making did not correlate with the level of reduction in attention. Compared to judgments made following a normal night of sleep, sustained wakefulness resulted in increased activity in the ventromedial prefrontal cortex and decreased activation of the right anterior insula. The ventromedial prefrontal cortex has been correlated with gain-seeking behavior, and the anterior insula has been associated with loss-averse behavior. Combined, the imaging results suggest that sleep deprivation causes individuals to be less troubled by losses and more interested in behaving with the intent of maximizing gain.

In sum, inadequate sleep biases valuation by augmenting the importance afforded to potential gains as compared to preventing losses. In light of the finding that such judgmental impairment exists independent of measured vigilance, consideration must be given to the possibility that conventional treatments to maintain performance during prolonged wakefulness (i.e., stimulants) may be ineffective. Such treatments might succeed in enhancing attention, yet may not influence separate features of cognition. Thus, for the sake of your finances, perhaps the appointment of a designated decision-maker is as important as the designated driver when setting out on multiple sleepless nights in Vegas.

Original article can be found at: http://www.jneurosci.org/content/31/10/3712.full
Posted by      Anjali C. at 8:23 PM MST

Your Brain on Nirvana


Any student has experienced that moment in class when he cannot for the life of him recall what the professor has just said seconds before. Whether it was because he was distracted watching a gnat fly around the light overhead or because his furiously working writing hand wasn't taking notes quite quickly enough to keep up with the lecture, there are always a few intervals which we miss in our daily lives, because our brains lack adequate attentional resources - unless you happen to be an expert in Buddhist meditation, that is. Among its various purported benefits, which include changes in metabolism and blood pressure, meditation also has been shown to result in altered brain structure and function. In other words, meditation induces neuroplasticity. In much the same way that one can obtain expert proficiency in a foreign language, mental training via meditation can result in increased information processing capacity in the brain.

Meditation is used by an increasing percentage of people to promote relaxation and a heightened sense of well-being. In a study published by the IEEE Signal Processing Society, researchers showed that meditation also leads to increased levels of concentration and a reduced attentional blink, as well as enhanced cortical area, in a manner similar to other forms of skill acquisition. The study made the distinction between two types of meditation - Focused Attention (FA) meditation and Open Monitoring (OM) meditation. Using fMRI to measure hemodynamic changes in various areas of the brain, the researchers showed FA meditation to be correlated with activation of the dorsolateral prefrontal cortex; the visual cortex; and the superior frontal sulcus, supplementary motor area, and intraparietal sulcus. These areas are associated with our ability for monitoring, engaging attention, and attentional orienting, respectively. When an individual meditates regularly and becomes an "expert", the cortical area of these regions in the brain increases. This would seem to indicate that attention is a trainable skill.

In addition to being able to pay focused, long-term attention to a chosen object, meditation experts were also shown in the study to undergo less activation in their amygdalas in response to emotional stimuli. This would seem to imply that emotional behaviors are not compatible with a stable state of advanced level concentration, and also that our emotional state can be consciously controlled, to some extent.

The implications of attention as a trainable skill appear to be numerous. For example, let us consider Attention Deficit Disorder (ADD). It would seem that individuals who suffer from a seeming lack of ability to focus for prolonged periods of time might benefit from practicing meditative techniques, in which the mind is calm and focused for prolonged periods. In addition, the general population might also benefit from the ability to reduce "neural noise" and thus pick up more information from the environment more quickly, rather than becoming overwhelmed by the constant data input. For students, the ability to focus in class and process more information more efficiently could have considerable impact on their learning. Many aspects of the impact of meditation on the human brain are as yet unknown, but it would appear that it has profound effects on attention and learning through the creation of novel synaptic connections, in addition to its role in promoting the cultivation of general mental and emotional health.

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2944261/
Posted by      Clarinda H. at 6:36 PM MST

What will they think of next?


Who knew? From the 1960s into the 1970s, ablative stereotactic surgery was used to treat neurologic disorders and neuropsychiatric illness. This treatment was largely abandoned after the 70s due to the development of highly effective drugs to treat these problems, for example, levodopa to combat Parkinson's disease. Today there seems to be a virtual renaissance of similar techniques used to help those suffering.

The technique being employed uses high-frequency electrical deep brain stimulation (DBS) on specific targets to counteract some disorders. Compared to traditional ablative stereotactic surgery, which consists of lesions and very invasive, irreversible brain surgery, DBS is much less invasive in some respects. By applying high-frequency electrical stimulation to specific brain structures, an effect similar to (but distinct from) that of a lesion is essentially observed. Ever since this technique's rise in popularity (starting in the 1990s), people have had the option of a "less permanent" alternative. These electrical pulses are delivered by electrodes chronically implanted into a person's brain at specific regions. The exact mechanism of action of DBS still isn't fully understood, but the effects and benefits to patients are both lasting and clear.
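
For a sense of what "high-frequency stimulation" means as a signal, here is a minimal sketch of a pulse train; the 130 Hz rate and 90-microsecond pulse width are typical published DBS settings, assumed here for illustration rather than taken from this article.

```python
import numpy as np

fs = 100_000           # sampling rate (Hz), chosen only for adequate time resolution
duration_s = 0.05      # 50 ms of signal
pulse_rate_hz = 130    # assumed stimulation frequency
pulse_width_s = 90e-6  # assumed pulse width

t = np.arange(0, duration_s, 1 / fs)
stim = np.zeros_like(t)
for start in np.arange(0, duration_s, 1 / pulse_rate_hz):
    stim[(t >= start) & (t < start + pulse_width_s)] = 1.0  # normalized pulse amplitude

n_pulses = int(round(stim.sum() / (pulse_width_s * fs)))
print(f"{n_pulses} pulses delivered in {duration_s * 1000:.0f} ms")
```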

Some of the diseases mentioned in the article include Parkinson's, Tourette syndrome, obsessive-compulsive disorder, and depression. Patients receiving DBS to treat Tourette syndrome had a >70% decrease in vocal or motor tics, with disappearance of sensory urges. 35-70% of patients receiving DBS to treat OCD benefited from a significant reduction in obsessive and compulsive thoughts.

In my opinion, and it seems to be the case with most neurosurgical operations, DBS is the latest and greatest treatment available. Any time patients can avoid a permanent, irreversible effect such as a lesion, the better. My reasoning behind this is extensive. For example, suppose a patient suffering from body dissociation disorder doesn't identify with their right arm and right leg and wants to have these two limbs removed. This person could have the limbs amputated without fully understanding the long-term consequences involved, or even without any mental benefit. Or perhaps the doctor could try a different technique, such as lesioning a brain region, located using fMRI, thought to be triggering the disorder. There is a chance the lesion might not properly treat the disorder, or not treat it at all. The lesion may also impair the individual in a more negative way in the long run, and since lesions are practically irreversible, the person is worse off. If DBS were used (TMI could be used as a pre-emptive mapping tool), the patient could be treated for their disorder in a non-permanent way and avoid negative, unforeseen, long-term issues.

I'm not entirely sure how invasive DBS is, but the article made it out to be much less invasive than previous surgeries, which makes sense to me since over time medical practices should become more and more refined. Something haunts me about the fact that little is truly known and fully understood about DBS and TMI. Little red flags go up in my head every time that fact is mentioned. Whether or not it is effective and beneficial, I would prefer to know exactly why it is effective and beneficial before doctors implanted electrodes in my brain to deliver pulses of high-frequency electricity. This honestly sounds like something out of a science fiction story, but the real freaky part is that it seems to actually work. The big question is: would you ever have DBS performed on yourself? My answer is yes.

http://www.sciencedirect.com/science/article/pii/S089662730600729X
Posted by      Dylan R. at 5:42 PM MST

To Toke, or Not to Toke: Who Knows the Answer?


Growing up during the "war on drugs", we have all heard the plethora of myths and propaganda used to keep youth away from drugs, especially marijuana. "If you smoke dope you will go on to do heroin." "Marijuana will kill your brain cells." "Marijuana causes people to become violent and irrational." Despite these claims, it is a common belief, often found through personal experiences, that this may not be the case (not to mention the lack of data supporting the claims). This blurriness is not only seen in claims regarding the drug's effects, but in the contradiction between state and federal laws. There are currently 17 states (including Washington DC) which permit the use of medical marijuana for a variety of prescribed medical issues; while federal law on the other hand continues to classify cannabis as a Schedule I drug, defining the substance as having a high potential for abuse and having no accepted medical use in the United States. Wait, what duuude? Yep, you read it right! While United States law claims marijuana has no medical benefits, the state of Colorado currently supports over 100,000 medical marijuana patients (Colorado Medical Marijuana Industry)1.

So with such a lack of consistent information regarding cannabis' effects, how do we know what to believe? In 2006, neuroscientists Ivan Soltesz and Kevin Staley teamed up to attempt to identify any relationship between cannabinoids and memory. This research examined the effects of various cannabinoids, such as THC, a phytocannabinoid that is the major psychoactive principle of marijuana, and CP55940, a synthetic cannabinoid, on CB1 cannabinoid receptors. This type of cannabinoid receptor is the most abundant G-protein-coupled receptor in the brain and has an extremely high density in the hippocampal formation, suggesting a possible link between cannabinoids and memory deficits.

This study found that in vivo, THC depressed hippocampal and neocortical EEGs at several frequencies. The process was then repeated using the synthetic cannabinoid CP55940, which acts as a CB1 receptor agonist, to confirm that THC acted as an agonist at the receptor. Similar results were observed: these cannabinoids lessened the power of hippocampal EEG activity in theta, fast ripple, and gamma oscillations. These oscillations play a critical role in several memory functions, such as working memory, coordination of neuronal discharges across regions, and memory consolidation. These results show a correlation between memory deficits and the binding of an exogenous cannabinoid receptor agonist to hippocampal CB1 receptors. As a control, the trials were repeated while preadministering SR141716A, a CB1 receptor antagonist. As expected, the effects of the cannabinoids were successfully blocked.
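
For readers unfamiliar with what "power in theta or gamma oscillations" means, here is a minimal sketch of how band power is commonly quantified from an EEG trace; the synthetic signal and band edges are illustrative assumptions, not the recordings or exact bands analyzed by Soltesz and Staley.

```python
import numpy as np
from scipy.signal import welch

fs = 1000  # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
# Toy "hippocampal" trace: a 7 Hz theta rhythm plus a weaker 40 Hz gamma rhythm plus noise.
eeg = np.sin(2 * np.pi * 7 * t) + 0.3 * np.sin(2 * np.pi * 40 * t) + 0.5 * np.random.randn(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=2048)  # power spectral density

def band_power(f_lo, f_hi):
    """Sum the PSD over a frequency band (times the bin width)."""
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

print("theta (4-12 Hz) power:", band_power(4, 12))
print("gamma (30-80 Hz) power:", band_power(30, 80))
```

A drop in numbers like these after drug application is the kind of effect the study reports.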

This research is important for providing the groundwork for future marijuana research, which can be useful in future memory studies as well as studying models of addiction. So, before you go light up that joint, remember not everything you hear about marijuana is a myth: phytocannabinoids in marijuana are associated with memory deficits.

Sources:

Soltesz, Ivan, and Kevin Staley. "High Times for Memory: Cannabis Disrupts Temporal Coordination among Hippocampal Neurons." Nature Neuroscience, 2006. Web.

1 Colorado Medical Marijuana Registry
Posted by      Hannah M. at 5:04 PM MST

Apple or Kit-Kat?


Obesity and addiction are increasingly detrimental health conditions affecting the American people. Can these diseases be addressed by simple verbal cues that trigger top-down processing in the brain, leading to healthier or more abstinent choices? A study by Hare et al. found that, indeed, they can. The ventromedial prefrontal cortex (VMPFC), the inferior frontal gyrus (IFG), and the dorsolateral prefrontal cortex (DLPFC) are three brain regions that have been implicated in behavioral choices and decision making. Will the influence of health cues on self-control processes impact decision making in a more healthy and positive way?

Hare hypothesized that dietary choices would be improved by engaging the self-control processes in the DLPFC and IFG, and that these choices would be affected by the value signals of the VMPFC, enhanced by the health attributes highlighted during the experiment. Subjects were given four types of food to choose from, ranked healthy-tasty, healthy-untasty, unhealthy-tasty, and unhealthy-untasty. Subjects were cued before the decision-making process to base their choice either on health factors or on tastiness. fMRI was used to observe the neural mechanisms at work during decision making. The study yielded a positive correlation between VMPFC activity and the subjective stimulus value of each food option.

To understand the workings at the neural level, let us find out what role the DLPFC plays in this cascade. The DLPFC has two regions: one, the DLPFC-M, guides the subject to make a healthy decision no matter what. This goal to be healthy is reenergized by health cues, which in turn jumpstart a cascade of top-down control processes that govern decision making. The DLPFC-U, however, is meant to take task instructions into consideration and to make a decision based on what the individual prefers, regardless of the attribute values associated with the options. These two areas of the brain come into conflict during this task. When the subjects were told to pick tasty foods, the DLPFC-M signals to the IFG, which signals to the VMPFC to put more emphasis on the health attributes during decision time. In conclusion, health cues rendered the healthiness of food more important in the VMPFC and increased the likelihood that subjects would behaviorally pick the healthy-untasty food option.
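
One way to picture this modulation is as a weighting of taste and health in the value signal; the simple linear form and the ratings below are assumptions for illustration, not the actual model or data from Hare et al.

```python
# Toy value model: each food has a taste rating and a health rating (0-10, invented),
# and a health cue is modeled as increasing the weight placed on healthiness.
foods = {
    "apple (healthy-untasty)":   {"taste": 2, "health": 9},
    "kit-kat (unhealthy-tasty)": {"taste": 9, "health": 1},
}

def stimulus_value(food, health_weight):
    """Hypothetical VMPFC-like value signal: taste plus weighted healthiness."""
    return food["taste"] + health_weight * food["health"]

for condition, w in [("no health cue", 0.2), ("health cue", 1.0)]:
    best = max(foods, key=lambda name: stimulus_value(foods[name], w))
    print(f"{condition} (health weight {w}): choose {best}")
```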

This experiment demonstrates that self-control processes in the DLPFC and IFG, triggered by health cues, are active during dietary choice and increase the subjective stimulus value of an item in the VMPFC, hence biasing behavioral choice. These findings could be incorporated into the development of treatments for obesity and addiction.
Posted by      Dora P. at 4:41 PM MST
Tags: fmri

Optimism: Is too much a bad thing?


We've all been told at one point or another in our lives to look on the brighter side of a given situation. Most of the time we do, because the brighter side brings some sort of happiness; looking on the brighter side eases the negative feelings we have toward the situation, keeps us positive, and keeps our stress levels down a bit. But how can you still be optimistic even when there is information that goes against what you believe? As I go through the article "How unrealistic optimism is maintained in the face of reality," I will hopefully answer this question.
In this article, Sharot et al. try to explain why some of us are so optimistic and whether that could be a bad thing. The article focuses on cases in which people do not take the precautions they need to protect themselves, underestimating future negative events, and on why they are so resistant to changing their beliefs (Sharot et al.). The experiment was conducted as follows: Sharot et al. asked participants to estimate the probability that an event would happen to them and then measured their brain activity. A total of eighty events were tested, all of them adverse life events such as household accidents, adultery, or owing a large amount of debt. They then combined a learning task with fMRI. This allowed Sharot et al. to identify how blood oxygen level-dependent (BOLD) signals track estimation errors depending on whether the information given led to optimism or pessimism (Sharot et al.). To determine estimation error, they used the equation: estimation error = estimation - probability presented. They also used questionnaires to see if people changed their beliefs about an event based on emotional arousal, how bad the event is, whether they were familiar with the event, or whether they had encountered such an event before.
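
To make the estimation-error measure concrete, here is a small worked sketch; the events echo the article's examples but the probabilities are invented, and the desirable/undesirable labels reflect the usual interpretation for adverse events (learning your risk is lower than you thought is good news).

```python
# estimation error = estimation - probability presented (as defined above)
trials = [
    {"event": "household accident", "estimate": 0.40, "presented": 0.30},
    {"event": "owing a large debt", "estimate": 0.10, "presented": 0.25},
]

for trial in trials:
    error = trial["estimate"] - trial["presented"]
    # For an adverse event, overestimating the risk means the presented statistic
    # is good (desirable) news; underestimating it means the news is undesirable.
    kind = "desirable news" if error > 0 else "undesirable news"
    print(f"{trial['event']}: estimation error = {error:+.2f} ({kind})")
```
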
Their results showed that a region of the brain, the right inferior frontal gyrus, showed reduced neural coding of undesirable errors regarding the future in people who were optimistic. They also found that the asymmetry in how people changed their beliefs was due to a reduced expression of an error signal in the region implicated in processing undesirable errors regarding the future (Sharot et al.). The questionnaires showed that people's belief changes did not depend on the severity of the event, whether it was familiar, or whether they had encountered it before. The BOLD signal tracking showed that people with the largest optimistic update bias failed to track undesirable errors, meaning the relationship between undesirable error and BOLD signaling was close to zero, whereas people who did not show selective updating of belief showed a strong relationship between undesirable error and BOLD signaling.
So the asymmetry did not come from how bad the future event was going to be, how familiar it was, or whether it had been encountered before, but from a failure to code and process undesirable errors regarding the future. So really, being optimistic, or staying optimistic even after information has contradicted your belief, isn't entirely within your control, because if your brain fails to code and process that information you can't really do much about it. Perhaps you could in theory, but that raises questions for another time.

Sharot, Tali, Christoph W. Korn, and Raymond J. Dolan. "How Unrealistic Optimism Is Maintained in the Face of Reality." Nature Neuroscience. Nature America, Inc., 9 Oct. 2011. Web. 3 Dec. 2011.
Posted by      Kou X. at 4:15 PM MST

Pay Attention! The Relationship Between Memory and Attention


Memory and attention are not always seen as related; however, in Johnson and Chun's article, the connections between these two processes are examined in detail. The article could be considered a meta-analysis of several studies on memory and the different attentional states themselves. Both perceptual and reflective attention were studied and reported, showing interesting results about how the brain categorizes and retrieves memories in relation to these types of attention. The majority of the studies focused on within the article had been arranged and performed by the authors of the article, giving an interesting perspective.

Neuroimaging was used to determine the areas of the brain most activated by certain stimuli. Functional magnetic resonance imaging (fMRI) was the technique most referred to in this article; fMRI is used to visualize neural activity based on the hemodynamic (blood-oxygen-level-dependent) response within the brain. With this technique, it was seen that similar areas of the brain are activated when a stimulus is first observed (what the article refers to as perception) and when the same stimulus is being recalled (referred to as reflection). For instance, a cue for a visual memory will cause a higher activity level in the visual cortex, in the same way that the visual cortex was originally stimulated when the stimulus was first observed. Similar responses are seen in both short-term and long-term memory recollection.

It has also been observed that certain activities relating to either perceptual attention, like repetition attenuation, or reflective attention, like reactivating and retrieving, activate areas that are generally similar to the areas activated during the experience of remembering. Beyond the areas originally involved, there are other areas involved in the processes of memory and attention. These include the hippocampus, the anterior cingulate cortex, and various other areas of the frontal and parietal lobes of the brain. The article demonstrates that refreshing perceptual events using both types of memory also shows similarity in both activity levels and the regions that are activated. The studies have shown that there can be severe interference if a participant is told to recall multiple objects or situations that were encoded with similar attention states and that are located in similar areas.

This article could have been improved if it had looked at stimuli from multiple senses rather than just visual stimuli, as there could be vastly different results for memories of different senses. Furthermore, reviewing their own experimental studies could give rise to a bias in the analysis of the studies. However, this article did bring up some important ideas.

The conclusions drawn from this article could lead to many other topics of research that could help in the understanding of memory, attention, and how they are able to work together. Further knowledge of these relationships could provide information on how to improve educational systems and could promote more effective ways of learning.




Chun, M. M., Johnson, M. K. (2011). Memory: Enduring traces of perceptual and reflective attention. Neuron, 72(4), 520-535. Retrieved from http://download.cell.com/neuron/pdf/PIIS0896627311009615.pdf?intermediate=true
Posted by      Breanna S. at 4:06 PM MST
  Christina Uhlir  says:
Nice article :P
Posted on Mon, 5 Dec 2011 7:55 PM MST by Christina U.

Lick, Lick, Lick...Mice and Autism


Autism. It seems as if everyone knows someone diagnosed with autism in today's society. Living with autism is no walk in the park, and because of the behaviors involved it is sometimes harder for those surrounding the person than for the person diagnosed with autism. Since mouse models of autism began, about ten years ago, researchers have discovered a slew of new information, including genes linked to the disorder. While the ultimate purpose of mouse models is to provide testing grounds for drugs and to understand the molecular underpinnings of autism in the brain, sometimes research needs some regulations in order to reach that goal.

Nature Neuroscience is in the middle of presenting a special on autism titled "The Autism Enigma." The special includes a variety of topics, blogs, papers, and news on autism to choose from, all of it great information. After reading several articles, one stood out, titled "New mouse models of autism highlight need for standardized tests." In it, Williams argues that as more and more models are being made, standardized tests need to be created in order to study and compare more clearly the effects of the various genetic mutations "that are collectively providing the field with a window into the brain structure, neuron function and cellular pathways associated with autism, as well as a platform for testing new drugs." Also, as in most research, people have different ideas of what a given behavior is; what one person may call repetitive behavior, like a mouse repetitively licking its paw, another may not. Therefore standardized testing is required in order to further improve research in mouse models of autism.

Williams' paper also details current research being conducted. She reports results from Daniel Geschwind of UCLA, Matthew Anderson of Beth Israel Deaconess Medical Center, Guoping Feng of the Massachusetts Institute of Technology's McGovern Institute for Brain Research, Paul Worley of Johns Hopkins University, Jacqueline Crawley of NIMH, and Alea Mills of the Cold Spring Harbor Laboratory. Geschwind's mouse model had a mutation in CNTNAP2, which has been linked to both a familial epilepsy disorder and autism. I have seen this relationship between epilepsy and severe autism in a family friend, so the fact that researchers are discovering genetic links is very promising. Anderson created a model with duplications in Ube3a, a gene involved in protein degradation that is linked to autism in humans. His mice ignored other mice, avoided new toys, and were abnormally quiet. Feng presented the newest model, showing that deleting Shank3, "which encodes a protein that helps stabilize synapses between neurons, in mice produced the same three core symptoms of autism as seen in people: abnormal social interactions, communication deficits and repetitive behavior." Worley also worked with Shank3, except his group deleted only one exon of the Shank3 gene and found similar social problems in the mice's behavior. Crawley and Mills seem to have taken a different approach, in which they observe communication and movements in the mice through highly sensitive microphones and video tracking technology. They hope to look at it from a behavioral aspect instead of a neurogenetic one in order to "take the blinders off" and view every aspect of the mouse.



Nature Neuroscience

http://www.nature.com/nm/journal/v17/n11/full/nm1111-1324.html#/references
Posted by      Ashlyn C. at 3:41 PM MST

Morphine: Friend or Foe for People Dealing with Long-Term Pain?


Lights reflect off of the gleaming road in the rain. The highway embankment funnels traffic into a valley where pools of black water flood the lanes. Your car suddenly hydroplanes and you veer into the other lane. The sound of metal wrinkling like tin foil ricochets in your ear. You wake up in the hospital with a horrendous pain emanating from your broken legs. Morphine is dispensed to ease the pain. Rehabilitation is slow and the pain is incessant. Tolerance develops. Withdrawal symptoms like abdominal cramps and depression begin. You want to stop taking morphine, but the pain in your legs persists and makes it difficult to walk. What do you do?

Morphine has the possibility to help those in pain, but it also has the potential to create a dangerous addiction. Opium, from which morphine is derived, has been in use since ancient times because it is a powerful and effective painkiller. Research is now looking into morphine's mode of action in the body, to better mitigate the unfortunate side effects of tolerance and addiction during long-term pain control. In the November 30th issue of the Journal of Neuroscience, a group led by Dr. Ping Zheng in China found that chronic morphine treatment actually switches the effect of dopamine on pyramidal cells of the basolateral amygdala (BLA) from inhibition to excitation. Pyramidal cells in the BLA are involved in emotion. Excitation of these cells could change the emotional response, which is especially important in withdrawal, when negative feelings can contribute to a relapse.

The researchers used rats to test the effects of chronic morphine treatment. They induced morphine tolerance in rats and then used brain slices to study excitatory postsynaptic currents (EPSCs) with the whole-cell patch clamp method. Compared to the control group injected with saline, the morphine-treated rats had EPSC amplitudes roughly 50% higher. After this observation, the team wanted to investigate the reason behind the change. They applied a dopamine D1 receptor antagonist in the morphine-treated rats, and the EPSCs were then the same as in the saline control group. Thus they concluded that a change in D1 receptors was responsible for the excitatory response.

But what changed about the dopamine D1 receptors at the molecular level? The researchers determined that morphine-treated rats had a higher release of glutamate from the presynaptic neuron. Looking at the expression of D1 receptors using Western blotting, they saw increased expression of the D1 receptor compared with saline controls. The researchers hypothesized that this increased expression might be dependent on protein kinase A (PKA), so they tested this with a PKA inhibitor. They indeed found that the increased release was due to PKA activation.

To supplement these studies, a behavioral test called conditioned place aversion (CPA) was performed on the rats. In this test, rats were placed in one compartment on days when they received the drug, and in another compartment on days when they did not receive the drug and were experiencing unpleasant withdrawal symptoms. The rats were then allowed to move freely into either compartment, plus a third one. The time the rats spent in each compartment was clocked, and the CPA score was determined from the time spent in the withdrawal-paired compartment relative to the time spent in the drug-paired compartment. The researchers used this test to determine whether the increase in D1 receptors is responsible for the withdrawal-induced conditioned place aversion. The morphine rats strongly avoided the withdrawal compartment, but when a D1 receptor antagonist was injected into their BLA, they no longer avoided that compartment. Therefore, D1 receptors are responsible for part of the withdrawal process.
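
The post's description of how the CPA score is computed is ambiguous, so the sketch below simply uses the difference in time spent in the two compartments, one common way such an avoidance score is expressed; the exact formula in Li et al. may differ, and the times are invented.

```python
# Hypothetical dwell times (seconds) for one rat during the free-exploration test.
time_in_compartment_s = {
    "withdrawal_paired": 120.0,
    "drug_paired": 480.0,
    "neutral": 300.0,
}

# A simple avoidance score: less time on the withdrawal-paired side gives a more
# negative score, i.e. stronger conditioned place aversion.
cpa_score = time_in_compartment_s["withdrawal_paired"] - time_in_compartment_s["drug_paired"]
print(f"CPA score: {cpa_score:+.0f} s")
```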

This study could lead us to understand more about the molecular nature of morphine tolerance and addiction. Using these findings, new ways to combat the negative side effects of morphine use could be implemented.

Li, Z., Luan, W., Chen, Y., Chen, M., Dong, Y., Lai, B., Ma, L., Zheng, P. (2011). Chronic Morphine Treatment Switches the Effect of Dopamine on Excitatory Synaptic Transmission from Inhibition to Excitation in Pyramidal Cells of the Basolateral Amygdala. Journal of Neuroscience, 31(48): 17527-17536.
Posted by      Amanda W. at 1:42 PM MST

Why Keep A Promise?


It is interesting to see the importance humans place on a promise. A promise is not visible or tangible, yet it still seems to have a strong, compulsory quality to it. Why is that? The truth of the matter is that humans have the exceptional capacity to establish social norms and sustain cooperation with one another in a way not seen elsewhere in the animal kingdom. Before society's infrastructure of rules and laws existed, promises were made as a way to ensure trust, teamwork, and partnership. Perhaps the most intriguing aspect of a promise is that it is a verbal, nonbinding agreement. Yet despite the lack of concrete liability, we still make promises every day.

Some research has looked into the brain systems involved in nonbinding agreements, but there are still more questions than answers regarding this topic. Using promises as a premise for research opens a unique door because promises can either be kept or broken. They can be made for many reasons, but there are two main justifications for keeping a promise. The first, ensuring future trust and cooperation, is referred to as an instrumental reason. The second, keeping a promise because it is the right thing to do, is called the intrinsic reason. The study in this paper focuses on the latter of these two explanations.

Each trial of the experiment had two subjects, a trustee and an investor, and the trustee's brain activity was measured. First, the trustee promised the investor to always, mostly, sometimes, or never keep their promise; in this study, to be trustworthy means sharing the money made equally. The investor could then choose whether or not to invest, and the trustee could choose whether to keep or break the promise to share the money. Because the trustee chose both the strength of the promise and whether to honor it, two main groups of trustee subjects emerged: almost all promised to "always" keep their promise, but when it came time to follow through, the subjects split into a group who honored the promise and a group who did not.

This study was the first to use a design that looks at three different stages of a promise. The first is the promise stage, where the promise is made; next is the anticipation stage, while the trustee waits for the investor's commitment; and finally the decision stage, where the promise is either kept or broken. Researchers could distinguish subjects who would keep their promise from those who would break it by brain activity during the promise stage, when the deceitful act was presumably already planned.

This study found that each stage of the paradigm revealed different, highly specific activation patterns in the brain. The promise stage is where a dishonest act may already be planned but not yet carried out, and the researchers hypothesized that if the subject already intends to break a promise, this misleading gesture will induce an emotional conflict. This emotional clash showed activity in parts of the brain involved in conflict and negative emotional processing, such as the anterior cingulate cortex and amygdala. Brain activity during the anticipation stage showed parallels to traits such as depression and neuroticism, both of which are associated with negative expectations of the future. When the subject had to decide whether to keep or break the promise, breaking it showed brain activity similar to the emotional process of telling a lie and the guilt that it involves. This study showed plausible evidence tying nonbinding agreements to emotional and logical processes in the brain, evidence that helps explain why humans value and venerate the simple idea of a promise.



Baumgartner, Thomas, Urs Fischbacher, Anja Feierabend, Kai Lutz, and Ernst Fehr. "Broken Promises." Neuron 64.5 (2009): 756+. ScienceDirect. Elsevier Inc., 10 Dec. 2009. Web. 5 Dec. 2011.
Posted by      Bethany B. at 10:48 AM MST

A Terrible Mistake Has Been Made


Human immunodeficiency virus (HIV) is still wrecking lives today. Categorized as one of the world's top killers, it has driven scientists to work hard in search of a cure. While patiently waiting for one, many patients undergo cART, or combination antiretroviral therapy, which uses a mix of different antiretroviral drugs to hold off HAND, or HIV-associated neurocognitive disorders. Patients on this therapy, however, were found to experience increasing damage to the brain, and it was concluded that the therapy is not an effective way to combat HAND. Bae and her team exposed rat hippocampi to gp120 for different lengths of time, then inhibited suspected proteins of the pathway, such as CXCR4, and viewed the results with Open Lab software and calcium imaging.

Gp120, a surface protein that HIV uses to bind to T-cells, is a toxin that enhances NMDA receptor activation, but how it does so was unknown. Studies have found that disrupted NMDA receptor trafficking is linked to disorders such as Alzheimer's disease. Evidence suggested that gp120 assembles NMDA receptors into clumps, or modified microdomains, by increasing the size and stability of lipid rafts, which are involved in receptor trafficking. Bae and her team used this idea as a starting point to uncover the mechanism.

The team was successful in finding a mechanism. First, gp120 enhanced the transport of NMDA receptors to the membrane by triggering phosphorylation of the receptor's C-terminal, which regulates NMDA receptor trafficking. Exposing the hippocampi to gp120 increased phosphorylation levels, specifically at serine 896 and serine 897. Inhibiting PKA or PKC halted gp120's effect, so PKA and PKC were identified as the kinases that gp120 activates to phosphorylate the C-terminal.

Second, gp120 stabilized NMDA receptor microdomains by increasing the size of lipid rafts. Ceramide, a lipid found in rafts, was believed to be involved in increasing their size and stability. Ceramide is synthesized by hydrolysis of sphingomyelin, a type of lipid, and when the hydrolytic pathways in the hippocampi were blocked, the lipid rafts no longer increased in size. In particular, the enzyme nSMase2, which hydrolyzes sphingomyelin, was found to be responsible for increasing lipid raft size. Bae took this further and inhibited a key factor from a separate pathway that also enlarges lipid rafts: CXCR4, a protein that HIV exploits, was found to increase lipid rafts via the second messenger IP3 and the kinase PKC.

Finally, Bae and her team found that stabilizing the lipid rafts prevented NMDA receptors from dispersing out of the microdomains: hippocampi were first exposed to gp120 and then to beta-cyclodextrin, a drug used to disrupt lipid rafts.
Bae and her team have made a valuable contribution that brings us one step closer to finding a cure. Though it may seem like only a baby step, at least we are one step closer.

Bae, Mihyun, et al. "The Human Immunodeficiency Virus Coat Protein gp120 Promotes Forward Trafficking and Surface Clustering of NMDA Receptors in Membrane Microdomains." The Journal of Neuroscience 31.47 (2011): 17074-17090
Posted by      Erika L. at 8:31 AM MST

Move now or move later?


We have all done this. While driving down the road, perhaps on a mountain road, you round a corner and suddenly there is an obstacle in your path, such as a boulder or another car. To avoid crashing, a plan must be made quickly, requiring both visual and motor control. Do you slam on the brakes, or perform a dangerous swerve to avoid the object before you? Reacting immediately leaves plenty of time to perform an action, but you lose the extra information that would help you choose the best one. For example, say you immediately decide to swerve to avoid the car in front of you, but as you swerve into the other lane, you realize there is another car coming in the opposite direction; had you taken a bit more time to observe the situation, you probably would have chosen to slam on the brakes instead. Conversely, the more time you spend before acting, the more sensory information you can gather to make a wiser motor decision, but as a consequence there is less time to perform the action you decide on. Although so much is going on, we make these trade-offs automatically.

Peter Battaglia and Paul Schrater have researched this very phenomenon. In their study, they allowed participants to control both the time spent looking and the time spent moving, so they could trade visual accuracy for motor accuracy as needed. To do this, participants were placed in front of a computer with a touch screen. They were instructed to put a finger on a start button, which made a target appear on the screen, and then glide the finger over to the target before a timer ran out. The computer recorded the time spent on sensory input (the time between touching the start button and first moving the finger) and the time spent on motor action (the time between that initial movement and hitting the target). Once a trial started, dots began appearing on the screen, representing new visual information, until the movement toward the target was initiated. Different trials were run with varying numbers of dots placed on the screen before the trial began, and the results were afterwards quantified for low, medium, and high dot densities.
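As a rough illustration of the timing measure described above, the viewing and movement durations could be recovered from three logged events per trial. The timestamps and names below are hypothetical, not the authors' code or data.

# Hypothetical timestamps (seconds) logged by the touch screen for one trial; illustration only.
t_start_touched = 0.00    # finger placed on the start button (target appears)
t_movement_onset = 0.42   # finger first leaves the start button
t_target_hit = 0.78       # finger reaches the target

viewing_time = t_movement_onset - t_start_touched   # time spent gathering visual information
movement_time = t_target_hit - t_movement_onset     # time spent executing the reach

print(f"Viewing time: {viewing_time:.2f} s, movement time: {movement_time:.2f} s")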

The results showed that when there were more dots on the screen initially, the participant spent more time viewing before making that first movement. How does this apply to us? Basically, when we have to make quick motor decisions, our brains automatically balance the time spent viewing the situation against the time left to act, based on how much sensory input is present. The more sensory input, the longer the brain takes to make a motor decision. So if you were to round a corner and see a car in your path amid a lot of visual input, such as pedestrians, traffic lights, and bikers, it would take more time for you to make a motor decision than if the road were empty. This balance between visual accuracy and motor accuracy exists inherently in everyone, allowing us to make the best decision possible when little time is available.



Battaglia, P., and P. Schrater. 2007. Humans trade off viewing time and movement duration to improve visuomotor accuracy in a fast reaching task. The Journal of Neuroscience. 27: 6984-6994.
Posted by      Kara G. at 8:24 AM MST

December 4, 2011

Neuropsychiatric Disorder Models: Improvement Needed!


Neuropsychiatric disorders such as schizophrenia, depression, and bipolar disorder are serious health problems. They have substantial negative effects on a significant subset of the population and are still largely not understood. While the molecular targets of many psychotherapeutic drugs have been successfully reverse engineered, that work dates to the 1960s, and despite ongoing efforts little progress has been made since. This raises the question: why? Two scientists, Eric J. Nestler and Steven E. Hyman, published an article in the journal Nature Neuroscience seeking an answer. In their paper, "Animal models of neuropsychiatric disorders," they argue that the primary thing holding back research in the field is the difficulty of creating animal models of human psychiatric diagnoses. The authors then contribute to the field by analyzing the currently used models and discussing which ones are most likely to be valid and productive.

It can be very difficult to translate between animal and human thoughts and feelings; whenever one tries, there is a risk of unfairly anthropomorphizing. Since animals cannot report their feelings, researchers need to find roundabout ways to determine what is actually going on inside an animal's head. The typical methodology is to study animal behavior and use it as a proxy for mental activity. However, for most of the neuropsychiatric disorders that are clinically treated, what constitutes a legitimate disorder is not clearly separated from what constitutes normal variation. Furthermore, the same neuropsychiatric diagnosis can be assigned to two completely different sets of symptoms. This leaves researchers in a position where they must decide for themselves what constitutes a legitimate disorder, how to define it, and subsequently how it can be represented in behavioral models.

To usefully discuss the efficacy of studies of these neuropsychiatric disorders, the authors refer to a framework for validating models built on three components: construct, face, and predictive validity. Construct validity is a measure of how relevant a model's construction is to the disease. Face validity is a measure of how well a model recapitulates the physical and behavioral features of the human disease. Predictive validity is a measure of how well a model's response to treatments matches patients' actual responses to those same treatments.

The article then discusses what can be modeled in schizophrenia, depression, and bipolar disorder, and the validity of potential models. For schizophrenia, blunted affect, asocial behavior, diminished motivation, and deficits in working memory and/or conscious control of behavior are all symptoms that a behavioral model should seek to measure. The article claims that a good way to create constructively valid models is to use genetic animal models carrying highly penetrant human mutations, although it does not consider these models perfect. It also states that a good (but not sufficient on its own) measure of face validity is a deficit in prepulse inhibition (PPI), a phenomenon in which a weak initial stimulus reduces the startle response produced by a following, more intense stimulus.

For depression, neurovegetative symptoms such as abnormalities in sleep, appetite, weight, and energy, along with psychomotor agitation or retardation, are all potential indicators. With the caveat that no abnormality has proven sufficiently robust or consistent to validate an animal model, the paper claims that chronic social defeat stress, along with chronic mild and chronic unpredictable stress, can all induce depression-like states with some face validity. These methodologies are criticized, however, as potentially triggering an anxiety disorder with similar symptoms rather than modeling depression. The authors suggest that measures of other homeostatic symptoms, such as alterations in sleep, circadian rhythms, and feeding with attendant metabolic parameters, would strengthen claims of depression in animal models.

For bipolar disorder, the diagnosis rests on periods of mania with or without depression. The article notes that transgenic mice engineered to overexpress glycogen synthase kinase-3beta have exhibited manic-like behavior. These mutants are assessed to meet partial criteria for face validity along with predictive validity, but they fail to meet requirements for construct validity. The article suggests that studies investigating mania use a broad range of behavioral tests and interpret their data cautiously.

Finally, the article lists some general recommendations for researchers. These include specifying which aspects of the illness are meant to be modeled and stating the types of validators applied to the model. The authors also note that construct validity is the most compelling of the different validities and that it is best to use a broad range of behavioral assays.

It's clear that research into these neuropsychiatric disorders still faces many hurdles, especially when it comes to assessing bipolar disorder. However, as this paper shows, constructive focus is being brought to the forefront of the field. With genetic and technological advances applied to models that have clearly stated rationales and a sober discussion of validity, significant progress can potentially be made.
Posted by      Michael A. at 11:54 PM MST
  Michael Asnes  says:
Nestler, Eric J., and Steven E. Hyman. "Animal Models of Neuropsychiatric Disorders." Nature Neuroscience 13.10 (2010): 1161-1169. Print.
Posted on Sun, 4 Dec 2011 11:54 PM MST by Michael A.

Chunky Monkey, No Chunky Mice


If I were to tell you that some of my favorite things to do are to ride my mountain bike constantly, rock climb, go white-water rafting, and cruise the slopes on a pair of skis, you could probably hazard a guess as to where I am from. Living in Colorado for most of my life has given me access to a large playground of mountains in which to partake in numerous activities. I am fortunate nonetheless to live in one of the fittest states in the nation; according to CalorieLab, Colorado is considered the slimmest state in the United States. If we look at our country as a whole, though, we see that we are one of the most obese countries in the world. According to USA Today, out of 33 countries with advanced economies, the United States is the fattest, and with all other countries included, the US still ranks third in the world, just behind a few small Pacific islands. The interesting thing is that anywhere between 16 and 33% of children and adolescents are obese.

Obesity is the result of unhealthy eating habits, genetic influences, biological influences, and even cultural factors. For years scientists have studied obesity in an attempt to lower the number of deaths it causes each year and improve sufferers' overall quality of life. Recent studies suggest that there may also be a neurological component to obesity.

Thaddeus Unger and colleagues compared mice with normal levels of BDNF to mice in which BDNF signaling was disrupted. By using viral-mediated selective knockdown of BDNF in the ventromedial hypothalamus (VMH) and the dorsomedial hypothalamus, they were able to see BDNF's role in energy balance regulation. More specifically, these mutants exhibited hyperphagic behavior and became obese as a result, most likely because of an upset in the regulated equilibrium between caloric intake and expenditure. So BDNF, beyond its role as a plasticity factor and a driver of neuronal growth, is now being seen as a key player in the neural circuits of energy homeostasis.

This is particularly interesting when thinking about childhood obesity. Since BDNF spurs neuronal development during childhood, a reasonable assumption would be that removing it would stunt development and thus growth. In fact, something closer to the opposite happens: when BDNF or its receptor TrkB was depleted during the early postnatal stage of development, mice grew up to exhibit aggressive behaviors as well as dramatic obesity.

BDNF has even been linked to food intake in mice. Unger reports that signaling through the BDNF/TrkB pathway promotes satiety in the adult animal, so infusing or overexpressing BDNF in the brain of mature mice significantly reduces food intake. The reverse is also true: during fasting, when food is withheld from the mice, the amount of BDNF mRNA in the VMH decreases. Food intake matters for obesity because overeating is a common feature in many obese people.

So when considering a place to raise a family, I would stay away from Mississippi or Texas, but the role of BDNF in promoting satiety and regulating energy homeostasis may be something else to consider. Can neuroscience help the heavy-set? Science seems to think so. Further research may more precisely target the circuitry involved and help those who suffer from obesity.

Thaddeus J. Unger, German A. Calderon, Leila C. Bradley, Miguel Sena-Esteves, and Maribel Rios. "Selective Deletion of Bdnf in the Ventromedial and Dorsomedial Hypothalamus of Adult Mice Results in Hyperphagic Behavior and Obesity." The Journal of Neuroscience, 26 December 2007, 27(52): 14265-14274.
Posted by      anthony b. at 11:46 PM MST
Tags: obesity

Beauty and the Brain


Ever since life has been around, the ultimate goal has been to perpetuate it through reproduction. This goal is at the root of all life and comes as a natural instinct. From bacteria replicating in a host to tropical birds singing to potential mates, different forms of life are constantly trying to continue their lineage; without this instinctual drive, life would look very different than it does today. Along with the need to reproduce comes the drive to reproduce with the mate who has the best genes possible, since a mate with strong genes is the best bet for producing healthy offspring and preserving the species. But how could humans, for example, tell whether a potential mate has genes that will help them produce healthy offspring? Recent studies suggest that humans may make this judgment partly from the facial beauty of members of the opposite sex.

To determine whether the recognition of facial beauty actually has something to do with the other person's potential as a mate, researchers showed heterosexual males images of average or beautiful faces of both men and women. Using faces of both sexes let them distinguish brain areas that became active simply because a face was aesthetically pleasing (heterosexual men looking at other men) from areas that became active in response to what the researchers called "rewarding beauty" (heterosexual men looking at beautiful women), i.e. the aesthetically pleasing face of a potential mate. The results, based on fMRI images of the brain, showed a specific region that was active during recognition of rewarding beauty but not of purely aesthetic beauty: the sublenticular extended amygdala (SLEA). This selective activation led the researchers to think that the SLEA may help humans determine whether a member of the opposite sex has desirable genes that could propagate the species. From there, the researchers sketched a pathway for how this information might be processed. They categorized the observation of the various aspects of the face as the "core system" and saw that much of this activity took place in the superior temporal sulcus. The next step appeared to be a "beauty appraisal," where the brain decides whether the face falls into the rewarding-beauty or aesthetic-beauty category, and depending on which way that appraisal went, a different pathway in the brain seemed to be engaged.

These results seem to modernize the age-old instinctual drive to procreate with a partner who has an advantageous genetic makeup, and they give us a present-day explanation for why humans tend to prefer mates who are more beautiful. If nothing else, they allow us to blame our prolonged stares at an attractive individual on our sublenticular extended amygdala.


Citation: http://www.sciencedirect.com/science/article/pii/S0896627303002939
Posted by      Kyle D. at 11:03 PM MST

This Proves We're Obsessed with Shiny Things


How does a predator, within four seconds of scanning an environment, map it, memorize it, and sort out all the unnecessary information from the information it needs to survive? Many theories center on the difference between top-down and bottom-up visual processing. Top-down processing refers to the slower, executive cognition behind vision, while bottom-up processing is fast, not consciously driven, and heavily influenced by environmental cues. Those environmental cues have been studied both for their effects and for what exactly grants them salience, the property that allows a stimulus to stand out against its backdrop. While many studies have examined the effect of salience on things like saccadic eye movements, fixation periods, and mapping of brain locations, little to no work had tried to illuminate the relationship between salience and memory.
A simple task was devised in which 12 participants focused on a scene for a brief period of time. The view was then removed and the participants waited through a delay period, after which they were asked to recall the positions of several figures in the scene to test their memory. The researchers varied the difficulty of the scenes and the salience of the objects to see whether the two were correlated, and as it turns out they were: the salience of an object was directly correlated with participants' performance, meaning they were more successful at recalling objects with greater salience than at recalling inconsequential items. Furthermore, when the task was made more difficult, the positive effect of salience on performance grew even larger. One could argue that participants were simply drawn to the salient items and focused on them longer, thereby increasing the chance of memorization, but the researchers mapped and timed eye movements to measure fixation times and found no difference between salient and non-salient objects, meaning participants spent the same amount of time memorizing each object.
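For readers who want a feel for the kind of relationship being described, here is a minimal sketch of a positive salience-recall correlation computed with a standard Pearson coefficient; the salience scores and recall accuracies are made up for illustration and are not the study's data.

# Hypothetical per-object salience scores and recall accuracy (proportion correct); illustration only.
salience = [0.2, 0.4, 0.5, 0.7, 0.9]
recall_accuracy = [0.35, 0.45, 0.55, 0.70, 0.85]

def pearson_r(x, y):
    # Standard Pearson correlation coefficient.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(f"Salience-recall correlation: {pearson_r(salience, recall_accuracy):.2f}")  # positive, as the study reports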
To summarize the findings: humans' ability to recall objects within a certain space depends positively on the salience of those objects, and this is not due to any difference in memorization time. The finding that salience helped more as the task got harder suggests that the brain may use salience to identify objects of value and omit objects deemed unimportant when it is forced to compromise.
The authors did mention another study with conflicting results, one that instead asked people to judge whether a certain object was present in a scene. They argued that the difference in findings could be attributed to the inherent difference between the tests, since one dealt with object identification and the other with object location and spatial memory. They conclude that the salience of an object and its effect on memory need to be studied on a brain-system-by-brain-system basis, analyzing which systems are involved and what that would imply.
This study provides more insight into how vision has been used and fine-tuned throughout evolution. Recognition of the salience of an object is conserved across most species and clearly plays a pivotal role in the utility of vision as a whole. The ability to quickly assess an environment for the information essential to survival is something without which many animals would fall prey far more often for lack of attention. This often taken-for-granted aspect of our vision, one we are mostly unaware of, certainly needs to be studied further and fully understood.
Original Article: http://www.jneurosci.org/content/29/25/8016.full
Posted by      Christopher R. at 10:37 PM MST

Your mom was right: Videogames rot your brain.


Okay, so maybe the title is a little over the top; video games don't actually cause your brain to rot, but they do appear to have a negative effect on the function of certain parts of your brain. This area of research has been in the limelight over the last couple of years. Some individuals were even so vehemently opposed to the presence of violent video games in households with young children that the topic was brought before the Supreme Court last year. However, there have been few, if any, scientific findings substantiating these claims until now.

The groundbreaking research behind this new assertion was conducted by members of the Wang Lab in the Department of Radiology and Imaging at the Indiana University School of Medicine. Here's how Dr. Wang described his most recent results: "For the first time, we have found that a sample of randomly assigned young adults showed less activation in certain frontal brain regions following a week of playing violent video games at home." He also stated that the aforementioned brain regions play important roles in the regulation of aggressive behavior and emotion.

In his study, Wang studied 28 healthy males between the ages of 18 and 29 who had very little previous exposure to violent video games. These males were split into two groups, with one group assigned to play 10 hours of shooting games per week while the other functioned as a control (no video games). Each individual was imaged using fMRI at the start of the study and then once each week for the next two weeks. While undergoing fMRI, the subjects performed an emotional interference task, pressing buttons depending on the color of words shown on a screen; words with violent connotations were displayed occasionally between nonviolent action words. Furthermore, the men engaged in a cognitive inhibition counting task.

The results indicated that after just one week of playing violent video games, the men had decreased activation in the left inferior frontal lobe during the emotional interference task and less activity in the anterior cingulate cortex during the cognitive inhibition counting task, compared to the controls. These areas have been linked to the control of aggression and emotion.

While the results of this study may be pretty convincing, I think it's going to take a few more studies with similar results before people are willing to give up their beloved violent videogames. Also, if these results do prove to be valid, who would want to try to separate the gamers from their videogames, what with their propensity for increased violence and all?
Posted by      Justin E. at 10:08 PM MST
  Justin Eagles-Soukup  says:
Tom A. Hummer, Yang Wang, William G. Kronenberger, Kristine M. Mosier, Andrew
J. Kalnin, David W. Dunn & Vincent P. Mathews (2010): Short-Term Violent Video Game Play by
Adolescents Alters Prefrontal Activity During Cognitive Inhibition, Media Psychology, 13:2, 136-15
Posted on Sun, 4 Dec 2011 10:09 PM MST by Justin E.

The Gene That Could Speak


How did we acquire the ability to speak, to vocally communicate a thought? Was the apparatus necessary to make the sounds present before adequate intelligence? What can evolution and comparisons with other animal species teach us on the topic? Genetics has shed new light on one factor in language: the gene FOXP2. In Ewan Callaway's article "Language gene speeds learning" in Nature, it was reported that mutations in this gene may have contributed to the evolution of language by enhancing motor learning for the muscles involved in speech.
The gene FOXP2 has been known since the 1990s, when it was reported in a study of a British family in which three successive generations had a speech impediment associated with a mutation that left only one functional allelic copy of FOXP2. Similar developmental disorders of language were initially observed in individuals whose SPCH1 locus was translocated from chromosome 7. In these cases FOXP2, which encodes a transcription factor containing a forkhead DNA-binding domain, was disrupted at the breakpoint, and a point mutation affecting an invariant amino acid within the protein's forkhead structure was also found. For these reasons FOXP2 was thought to play a role in acquiring expressive or receptive language, or both.
To understand the particular role FOXP2 plays in this acquisition, the human sequence was compared to that of other species. The majority of vertebrates have a nearly invariant FOXP2 gene, yet the protein it encodes differs at two amino acids between humans and chimpanzees. The nearest relative exhibiting both human-specific changes is Homo neanderthalensis, suggesting the mutations appeared before the two lineages diverged roughly 500,000 years ago. This is only circumstantial evidence of a relation between FOXP2 and the development of speech, but the gene has clearly been implicated as a major player in the development of brain circuits related to motor learning.
In fact, when the human FOXP2 gene was expressed in mice, these animals showed learning advantages compared with counterparts expressing only the unaltered wild-type mouse FOXP2. When challenged to solve a maze, the humanized mice took 8 days to reach a 70% success rate and obtain the reward, while normal mice took an additional 4 days to reach the same level. Examination of their brains showed that the humanized mice had neurons with more dendrites and longer extensions, and that neurons of the basal ganglia became unresponsive more quickly, a phenomenon known as long-term depression that is implicated in the circuitry of learning and memory. Apart from these differences, mice carrying human FOXP2 were also less likely to leave their mothers and had altered ultrasonic vocalizations.
Further investigation revealed that only one of the two differing amino acids of the FOXP2 protein was responsible for these learning enhancements. It is interesting to note that carnivores such as canines, which evolved separately from the human lineage, carry the other, non-effective change, with no obvious consequences for brain function and certainly no capacity for speech comparable to humans.
To conclude, by relating these two observations, first that FOXP2 is key to adequate speech and second that it enhances learning, we can hypothesize that in humans this gene may have helped in learning the movements of the vocal organs necessary to speak and to translate thoughts into words. One can argue that FOXP2 may be involved in learning the movements necessary for speech without its effect on learning circuits necessarily being responsible for attaching particular sounds to thoughts and eventually creating language. Nonetheless, this idea, though not proven, has not been disproved either, which keeps FOXP2 a valid candidate for the learning processes that enabled sound to be translated into ideas.
Posted by      Brittany K. at 9:17 PM MST

Phantom Limb Pain and Cortex Reorganization


Feeling pain in the arm you lost in an accident? Does the arm you lost in the war itch terribly? The sensation that a lost limb is still attached to the body is known as a phantom limb, and the pain that often accompanies it is called phantom limb pain (PLP). The purpose of this study was to identify plastic changes in the somatosensory and motor cortices in patients with and without phantom limb pain. Most phantom limb sensations are painful, as if the limb were contorted into an awkward position, but some patients experience sensations such as itching, burning, or feeling as though the limb is too short. Although PLP is more common in the early stages following an amputation, some patients report pain for years afterward. It was previously discovered that PLP correlates strongly with representational plasticity in the somatosensory cortex; its relationship with plasticity in the motor cortex, however, was unknown. This experiment used transcranial magnetic stimulation (TMS) of the motor cortex and neuroelectric source imaging of the somatosensory cortex to study the correlation of plasticity across these cortices.

In this study, participants included five upper-limb amputees experiencing PLP and five upper-limb amputees experiencing no PLP. A German version of the West Haven-Yale Multidimensional Pain Inventory was used to evaluate each patient's stump and limb pain. To test for motor reorganization, focal TMS was delivered from a magnetic stimulator through a figure-eight magnetic coil, with the leads positioned so that currents flowed approximately perpendicular to the central sulcus, producing the largest peak-to-peak motor evoked potential in each muscle. In patients experiencing PLP, output maps determined by neuroelectric source imaging of EEGs showed significantly larger motor-evoked responses on the side lacking the arm than on the side with the remaining arm, whereas motor neuron excitability remained unchanged. Since it was previously known that motor reorganization in amputees takes place at a cortical level, the leap was made that "it is likely that cortical mechanisms are also responsible for the differences in reorganization observed in both patient groups" (Karl et al., 2001).

While these findings support the notion that increased plasticity is present in the motor cortex of PLP patients, the evidence for this main point is presented in an odd fashion. Immediately after making the claim about cortical mechanisms and presenting supporting evidence, the authors state that their results "do not rule out the possibility of additional subcortical reorganization." In other words, other factors could be causing or contributing to the effects their research describes, which leaves the findings somewhat inconclusive. Another problem with the methods is that the patients' amputations all occurred at different times, some more recent than others, which could have a profound effect on the degree of plasticity reached at the time of testing.
All in all, the research further supports previously claimed notions while adding little that is new or original. These limitations could be reduced by choosing patients whose amputations occurred within the same month. The potential of studies like this is immense, but further research needs to be conducted in order to draw more valuable conclusions.




Karl, Anke, et al. The Journal of Neuroscience, 15 May 2001, 21(10): 3609-3618.
Posted by      Madelyn K. at 8:29 PM MST

Psychopathic Love Stories: The Prefrontal Cortex


Psychopathy has colored literature, film, and conversation for hundreds of years due to the stark difference that exists between it and a normal, healthy personality. The scientific community has been equally fascinated with this deceptive, antisocial, and violent subset of the population that strangely lacks empathy or true, deep emotion. Socially, psychopaths are prone to violent crime and have high rates of recidivism, so a deep understanding of the brain pathology of psychopaths is necessary for creating the best social policies. Past research has highlighted the involvement of the prefrontal cortex in some of the emotional and affective problems found in psychopaths, but recently scientists in Wisconsin found a strong neural correlate of the disorder: the synaptic connections between the ventromedial prefrontal cortex (vmPFC) and the amygdala and the medial parietal lobe.

Some of the most robust studies on the vmPFC's involvement in psychopathy come from lesion studies. Patients with lesions in this area exhibit a characteristic lack of empathy, irresponsibility, and poor decision-making that mirrors common symptoms of psychopathy. Since the vmPFC mediates executive functioning, its connections to other brain areas dealing with executive function and emotion are hugely important. Two such areas, the amygdala and part of the medial parietal lobe, the precuneus and posterior cingulate cortex (PCC), were the areas of interest in the Wisconsin imaging study published last month. Previous studies have examined the same pathways, but this study had larger sample sizes for each group and was even able to test two subtypes of psychopathy: primary, or low-anxiety, psychopathy and secondary, or high-anxiety, psychopathy. Researchers examined the structural integrity and functional connectivity of white matter between the vmPFC and the precuneus/PCC, as well as in the uncinate fasciculus (UF), the primary white matter pathway between the vmPFC and the amygdala, ultimately finding that psychopaths had both lower structural integrity and lower functional connectivity than non-psychopaths.

In order to test the hypothesis that the uncinate fasciculus has lower structural integrity in psychopaths, diffusion tensor imaging (DTI) was used. DTI is a type of magnetic resonance imaging that measures the directional diffusion of water along neural tracts and yields fractional anisotropy (FA) scores, a measure of both fiber coherence and white-matter integrity at the microstructural level. As expected, psychopaths had significantly lower FA scores (and thus lower structural integrity) than non-psychopaths in both whole-brain and UF-specific examinations.
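For readers curious what an FA score actually is, below is a short sketch of the standard fractional anisotropy formula computed from the three eigenvalues of the diffusion tensor. The eigenvalues are invented for illustration, and this is only the textbook formula, not the processing pipeline used in the study.

import math

def fractional_anisotropy(l1, l2, l3):
    # Standard FA formula from the three diffusion-tensor eigenvalues.
    numerator = math.sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2)
    denominator = math.sqrt(2.0 * (l1 ** 2 + l2 ** 2 + l3 ** 2))
    return numerator / denominator

# Hypothetical eigenvalues (in 10^-3 mm^2/s) for a coherent white-matter voxel; illustration only.
print(f"FA: {fractional_anisotropy(1.7, 0.3, 0.3):.2f}")  # values closer to 1 indicate more directionally coherent diffusion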

Resting-state functional MRI examinations were then used to test whether there is lower functional connectivity in one or both of these pathways from the vmPFC. Results showed lower correlations between the BOLD (blood-oxygen-level-dependent) signals of the right anterior vmPFC and the right amygdala, indicating lower functional connectivity in the vmPFC-amygdala pathway. Lower functional connectivity was also found between the right precuneus/PCC and the posterior vmPFC.

The DTI and fMRI procedures in this study produced convergent results to highlight the importance of the vmPFC-amygdala and vmPFC-precuneus/PCC pathways in producing some of the symptoms of psychopathy. The functional connectivity of the vmPFC-amygdala pathway was shown to differentiate between psychopaths and non-psychopaths as well as distinguish between high-anxiety and low-anxiety subtypes of psychopathy. There was significantly greater functional connectivity in low-anxiety psychopaths than high-anxiety ones.

Taken as a whole, the results of this recent study suggest a strong involvement of the vmPFC-amygdala and vmPFC-precuneus/PCC pathways in psychopathy. More research needs to be done on the subject utilizing larger samples, specific task studies for fMRI, and isolation of specific nuclei of the amygdala to determine the precise role these pathways play in this fascinating emotional disorder, but the recent study adds vital information to the study of psychopathy.
Edited by      Sarah C. at 8:07 PM MST
  Sarah Cross  says:
Julian C. Motzkin, Joseph P. Newman, Kent A. Kiehl, and Michael Koenigs
"Reduced Prefrontal Connectivity in Psychopathy"
The Journal of Neuroscience, 30 November 2011, 31(48):17348-17357; doi:10.1523/JNEUROSCI.4215-11.2011
Posted on Sun, 4 Dec 2011 8:09 PM MST by Sarah C.

Dopamine: Dictating Dangerous Decisions


Decisions shape lives. Every day in the news we are bombarded with stories of people whose lives were changed by decisions they made, both good and bad. It seems plausible that the most rewarding outcomes are often accompanied by a risk of adverse consequences. So what is it that dictates how humans evaluate risk and reward? How do people decide whether something is "worth the risk"? Why are some people seemingly better at making decisions than others? A study published on November 30th, 2011 in the Journal of Neuroscience provides some insight into how different dopamine receptor subtypes are involved in evaluating risk and making decisions.
The experimenters investigated the role that dopamine receptor subtypes may play in helping evaluate scenarios and make beneficial decisions. They found that systematically activating D2-like receptors in rats substantially diminished risk-taking behavior, whereas activating D1-like receptors had no significant effect. Additionally, lower levels of D2 mRNA in the dorsal striatum were associated with increased risk taking among the rats being tested.
The experiment offered the rats a choice between a small reward with minimal risk and a large reward with greater risk, the risk taking the form of electric shocks to the rats' feet. Rats that accepted the shock received three times as much sucrose as rats that settled for the smaller, shock-free reward. Motivational tests were used to help ensure that the rats desired the larger dose of sucrose more than the smaller one, and D1 and D2 probes were used to analyze mRNA expression.
This type of experimental procedure can be problematic because there is a high degree of variability in rats' preferences regarding risk and reward. Although certain correlations drawn by the experimenters may be considered questionable for this reason, the experiment does attempt to relate the results to risk-taking behavior. Unlike previous work, which found rat performance to be stable, this experiment found that performance could range from strongly risk-averse to strongly risk-taking. The procedure was complicated and rested on many steps that were not well explained; for example, what baselines did the experimenters use in their motivational tests, and how were these somewhat arbitrary statistical markers determined? Throughout the paper there are manipulations that were apparently done "systematically," yet without more information on what that means in specific scenarios, it is difficult to judge the legitimacy of the procedures and thus their actual relevance to risk taking.
Regardless of these concerns, this experiment does further investigate how the dopamine signaling pathway is involved in risk taking. Understanding these pathways may help explain how decision-making processes are altered in psychological disorders, which are often marked by abnormalities in decision making and risk taking; individuals who suffer from them often have a very hard time making the decisions or performing the behaviors necessary to succeed in their endeavors. Furthermore, if more is determined about these pathways, it may also be possible to investigate, from a scientific perspective, why drugs cause people to make bad decisions. For now, when confronted with a tough choice, each person must still rely on personal preferences and insight. Perhaps further research can help discover exactly how dopamine receptors might be manipulated to assist struggling individuals in their decisions.
Posted by      Aaron R. at 5:15 PM MST
  Aaron Ramras  says:
Simon, Nicholas W., Karienne S. Montgomery, Blanca S. Beas, Marci R. Mitchell, Candi L. LaSarge, Ian A. Mendez, Christina Banuelos, Colin M. Vokes, Aaron B. Taylor, Rebecca P. Haberman, Jennifer L. Bizon, and Barry Setlow. "Dopaminergic Modulation of Risky Decision-Making." Journal of Neuroscience 31.48 (2011): 17460-17470. Web. 4 Dec. 2011.
Posted on Sun, 4 Dec 2011 5:26 PM MST by Aaron R.

Religious Brains Doing Some Good After All


Although it has been well established that all of our sensory experiences originate in our brains, many still maintain that we cannot account for religious, extrasensory experiences that go beyond the limitations of our physical bodies. It is often suggested that our spiritual self is something that remains independent of our physical bodies, and that one's faith is something that will remain with the dedicated few throughout their lifetime. While these views on spirituality and religiosity are still held to be as unshakeable and unquestionable as one's DNA, Italian neuroscientist Dr. Urgesi and colleagues have uncovered yet more data suggesting a link between very physical parts of the human brain and the creation of these extrasensory experiences.

The study sought to find a relationship between neural activity within pathways that unify the parietal, frontal, and temporal cortices and spiritual (i.e., religious) behavior and beliefs. The temporoparietal areas have already been associated with religiosity, since studies of patients with frontotemporal dementia and epilepsy showed marked increases in spiritual and transcendent beliefs not held prior to their diseased state. These spiritual and transcendent beliefs were measured using the Temperament and Character Inventory personality examination, which includes a sub-scale measuring self-reported levels of Self-Transcendence (ST). ST levels were based on responses to questions regarding one's capacity for self-forgetfulness and for transpersonal identification that goes beyond the spatio-temporal dimensions of the human body (i.e., experiences with God).

Dr. Urgesi and colleagues performed their study on a group of patients suffering from tumors involving either prerolandic (anterior subgroup) or temporoparietal (posterior subgroup) structures. Dr. Urgesi combined examinations prior to the removal of the tumors with MRI to classify participants into groups with high-grade, low-grade, and recurrent glioma, along with a control group of patients suffering from meningioma. Dr. Urgesi found that, pre-surgery, patients with tumors of the posterior subgroup showed higher ST scores within the high-grade and recurrent glioma groups, while no differences between posterior and anterior subgroups were observed within the low-grade glioma and meningioma groups prior to surgery.

Examinations after the removal of the tumors revealed varying degrees of lesions to the cortical structures of the participants. Dr. Urgesi and his colleagues' primary finding was that patients with a low-grade glioma removed from the posterior subset showed unusually rapid and long-lasting changes in their ST scores post-surgery. The same was found in the high-grade and recurrent glioma posterior subgroups. As expected, removal of tumors from non-cortical structures produced no changes in ST values among participants of the meningioma or anterior groups. Dr. Urgesi suggests that damage to posterior parietal areas may contribute to altered spiritual beliefs and behaviors. Luckily for those with high ST marks, follow-up interviews showed that patients with anterior lesions exhibited no increase in ST but also showed much less acceptance of their grim conditions than did the posterior, high-ST group.

The implications of these findings extend far beyond religious views, though, as Dr. Urgesi claims that if such steadfast beliefs and behaviors toward religion and spirituality can be rapidly and drastically altered by mild lesions sustained after brain surgery, then further research into lesion-induced personality changes may shed light on other disorders such as schizophrenia and bipolar disorder.

So just how dependent on the physical body are these steadfast personality traits, moral views, and religious beliefs if they can be altered simply by changing the wiring within our brains? Maybe our seemingly concrete religious and moral views, the ones we stand beside throughout our lives, are more fragile than we give them credit for. Our unshakeable views of the world don't seem to be so deep-seated after all! How rapidly might I convert to Islam or Christianity the next time I hit my head on the concrete? I guess I will just have to wait to find out... but luckily, the lesion-induced changes in my circuitry may help me better cope with the impending doom that my very damaged brain brought me in the first place.

Urgesi, Cosimo , Salvatore M. Aglioti, Franco Fabbro, and Miran Skrap. "The Spiritual Brain: Selective Cortical Lesions Modulate Human Self-Transcendence." Neuron 65 (2010): 309-319. Print.
Posted by      Tuttle J. at 5:02 PM MST
Tags: religion

The Joy of Laughter


Laughter. It's something we humans do almost daily to express pleasure, yet it is composed of a series of grimaces and loud shrieks. How is it that such a strong, blissful emotion can be connected with such odd behaviors? Furthermore, where does this feeling of joy come from? The scientists at Stanford say they have it all figured out.

In the December 4, 2003 issue of Neuron, a study from the Stanford University School of Medicine asserted that laughter and humor activate the mesolimbic dopaminergic reward system. In this study, sixteen adult subjects viewed 42 funny and 42 non-funny cartoons in random order and were asked to press a button to indicate whether or not they found each cartoon funny. Prior to the experiment, a separate group of subjects with a background similar to the test group chose the 42 funniest cartoons from a selection of 130; 42 non-funny cartoons were then selected to match them.

In order to find the areas of the brain that were active when a cartoon was presented to the subject, event-related fMRI (efMRI) was used. Areas were considered active if there was an increase in blood flow in that region of the brain. The unpredictable nature of random efMRI designs, the fact that activation was examined on a subject-by-subject and cartoon-by-cartoon basis, and the use of post-scan humor ratings ensured that pure reward was being measured while individual differences in humor were taken into account.
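As a rough illustration of what a cartoon-by-cartoon, subject-by-subject contrast means in practice, here is a toy Python sketch with simulated numbers (not the study's data or its actual analysis pipeline): for one subject, average the event-related response over funny and non-funny cartoons in a reward-related region and take the difference.

# Toy sketch (simulated numbers, not the study's data): compute one subject's
# funny-minus-non-funny contrast, cartoon by cartoon, in a hypothetical
# reward-related region of interest.
import numpy as np

rng = np.random.default_rng(0)
n_cartoons = 42
funny_bold = rng.normal(loc=0.8, scale=0.3, size=n_cartoons)      # hypothetical % signal change
nonfunny_bold = rng.normal(loc=0.2, scale=0.3, size=n_cartoons)

per_cartoon_contrast = funny_bold - nonfunny_bold    # cartoon-by-cartoon contrast
subject_contrast = per_cartoon_contrast.mean()       # subject-level effect
print(f"mean funny - non-funny contrast: {subject_contrast:.2f}")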

The researchers discovered that the activated regions included the ventral tegmental area, nucleus accumbens, and amygdala, all of which are vital to the mesolimbic dopaminergic reward system. Other areas such as the supplementary motor area, dorsal anterior cingulate cortex, and inferior frontal gyrus (including Broca's area) were also activated in the left hemisphere, which suggests that this hemisphere plays a large role in the processing of reward and positive emotional stimuli. It also suggests that this hemisphere is responsible for the physical display of humor, such as smiling and laughter.

Thus, when we laugh, we do so because of the release of dopamine, which produces the feel-good sensation and stimulates the areas that drive the actual behavior of laughing. Dopamine also keeps us laughing through the reward system it engages.

These discoveries make it possible to pursue further studies on the use of laughter as medicine. One possible way to study whether laughter has beneficial effects is through the use of optogenetics. By activating the areas discovered here with optogenetics, it would be possible to measure the effects laughter has on the immune and cardiovascular systems. It would also be possible to see if laughter could be used to effectively treat forms of depression that are due to a lack of dopamine release within the brain. Another, more fundamental study using optogenetics would be to test whether these areas alone account for humor or whether it is the combination of areas that makes something appear funny. By doing these tests it would be possible to see if laughter really is the best medicine or if it is simply a social construction that promotes good feelings.

Citation:

http://www.sciencedirect.com/science?_ob=MiamiImageURL&_cid=272195&_user=10&_pii=S0896627303007517&_check=y&_coverDate=2003-12-04&view=c&_gw=y&wchp=dGLzVlS-zSkzV&_valck=1&md5=2af750b3e08a955b3e8f9c81abfaadc2&ie=/sdarticle.pdf
Posted by      Mari W. at 4:31 PM MST

Practice makes perfect- Training your brain for music


Were you ever forced to learn an instrument as a young child? Did you ever hear the dreadful words "have you practiced today?" Did you have to have your parents sign papers indicating that you did indeed practice an hour of flute each day, so that you could receive an A in music class? Or were you one of the fortunate children who actually enjoyed playing an instrument?
The common notion is that practicing music has beneficial effects. In addition we often say that musicians are wired differently, that they approach problems differently. But what does that mean in the neuroanatomical sense?
A study by Christian Gaser and Gottfried Schlaug compared brain regions of musicians and non-musicians using a voxel-by-voxel morphometric technique to try to uncover anatomical differences between the two groups' brain structures.
Their reasoning was that musicians learn certain motor and auditory skills through their musical practice, and that such learning should produce some difference in the brains of adult musicians compared to non-musicians. Their results showed that there was indeed a difference in brain anatomy between the groups: a volumetric difference in gray matter. Musicians had greater gray matter volume in motor, auditory, and visual-spatial areas of the brain than non-musicians. However, the researchers were unable to determine whether this difference was predisposed or acquired. They suggest that the difference in gray matter volume is induced through practice rather than being predisposed, but they could not prove this hypothesis since their experiment did not specifically focus on the issue.
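For readers curious what a voxel-by-voxel comparison boils down to, here is a rough Python sketch of the logic, using simulated gray-matter values rather than the authors' actual data or pipeline: run a two-sample t-test at every voxel and correct for the number of tests.

# Rough sketch (simulated data, not the authors' pipeline): at every voxel,
# compare gray-matter values between musicians and non-musicians with a
# two-sample t-test, then apply a crude multiple-comparisons correction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_voxels = 1000
musicians = rng.normal(loc=0.52, scale=0.05, size=(20, n_voxels))      # hypothetical GM density
nonmusicians = rng.normal(loc=0.50, scale=0.05, size=(20, n_voxels))

t, p = stats.ttest_ind(musicians, nonmusicians, axis=0)
significant = p < (0.05 / n_voxels)          # crude Bonferroni correction
print(f"{significant.sum()} of {n_voxels} voxels differ after correction")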
Several years later, one of the researchers, Gottfried Schlaug, teamed up with several other researchers to focus on the brain development of young musicians. This experiment measured regional brain plasticity in young children. One group received musical training for 15 months while the other did not. The results indicated that children with musical training did indeed show greater voxel-wise expansion in these regions, meaning their development diverged from typical brain development.
Even though the results indicated that musical training does increase the gray matter of certain anatomical regions in the brain, the researchers could not completely rule out a genetic predisposition. This means the question of whether nature or nurture is responsible for the volumetric difference still stands. Do we have to be born a musician, or can we learn to be one? Either way, both papers seem to indicate that there are beneficial factors to learning an instrument at a young age. So for those of us who were forced to learn an instrument, it appears that no harm was done, at least not in the conventional sense. A fear of pianos (pianophobia) or other instruments (instrumentophobia) due to horrid, enslaving teachers is a different story, one that would take us more into the direction of psychology. But if your parents are still disappointed that you didn't turn out to be a great musician, just point out that nature might still have a role and that maybe you just weren't meant to be the next Beethoven.



Original Sources:
Gaser, C., & Schlaug, G. (2003). Brain Structures Differ between Musicians and Non-Musicians. The Journal of Neuroscience, 23(27).

Hyde, K. L., Lerch, J., Norton, A., Forgeard, M., Winner, E., Evans, A. C., & Schlaug, G. (2009). Musical Training Shapes Structural Brain Development. The Journal of Neuroscience, 29(10).
Posted by      Rebecca v. at 3:27 PM MST

Reasons You Should (not) Text and Read


Tap tap tap tap tap Bam Bam Bam *moaning* TAP TAP TAP TAP BAM BAM BA-BAM BAM BAM!

Take a page out of the Chun and Johnson book: if your roommate is having wild and kinky sex just next door, find someplace else to write your Civil War research paper. Keep in mind, this advice extends beyond sex as a distraction and a research paper as a task. In the November 17, 2011 issue of Neuron, the review Memory: Enduring Traces of Perceptual and Reflective Attention made several assertions about the growing body of literature concerned with the networks involved in, and the interactions between, attention and memory. Research on the dynamic interplay of memory and attention is currently sparse; until very recently, neuroscience research focused either on attention or on memory. Lately, however, researchers have found that their results about attention or memory phenomena cannot be explained without more information about how the two are conjoined. The purpose of this review was to assess the advances that have been made, possible applications of the results, and hypotheses to be tested in future studies.

Looking back at the poor sap listening to his roommate get it on not five feet away while he attempts to concentrate on how Union soldiers mistreated Confederate women and children, I wondered whether he can pay any attention to the task he is supposed to be doing at the moment (his paper). Thankfully, I do not have to conduct a research experiment myself to see if he will succeed: this question was already answered in A general mechanism for perceptual decision-making in the human brain. The answer is simple: if the task you are concentrating on is easily accomplished, then the amount of attention you need to devote to it is low (low load), and, unfortunately, distractions will impact your efforts much more than if the task were difficult. If the task is difficult, the cognitive load will be high and distractions are not as likely to detract from your concentration. I think the paper he is writing carries a high cognitive load, but I am also inclined to think that the amount of task-irrelevant sensory input he is getting is not low on the cognitive load scale, and, therefore, his reflective attention on the paper will suffer.

Beyond informing us how to respond to demanding situations, this review reflects on various findings that have been made, and steps still to be taken, in the exploration of the pathways implicated in memory and attention. A major recent discovery (July 2011), by a group of researchers from the Netherlands, is that not only do attention and memory interact, but memories of images (reflective representations, specifically), when retrieved, activate the same pathways as if the image had been seen a second time. The implications are clear: that picture in your head of your long-lost lover perfectly replicates what he or she looks like in real life, right? Not quite; all that Oliver et al. discovered is that the same pathways are activated in the perception and recall of a dot or shape, which cannot be extrapolated any further.

However, that is not to say that none of the studies in this review came to similar conclusions; a few even arrived at conclusions, and observed results, that are salient to the human condition. A few of the more scintillating results include, but are not limited to: when we are not distracted, the amount and detail of information we remember is extraordinary (implications for people with ADD/ADHD); the harder a task actually is, the more likely we are to focus on it rather than on distracting stimuli (studying habits); and older adults retain the ability to enhance memory (learn new things) while simultaneously being unable to distinguish false memories from true ones or to remember salient information from the past (memory loss due to aging).

The study of memory and attention interactions is new and, because of the information already gleaned from studies focused solely on attention or memory, certain questions can already be answered about their interactions. I, like the Civil War historian listening to his obnoxious roommate slam his way to a TBI, am not satisfied to simply sit around and listen (in my case about studies that have been performed, in his case sex). I am interested to learn more about attention-memory interactions and, someday, contribute to this fascinating field of study.

Now, who is ready for a pop quiz on the interactions of memory and attention?

Source: http://www.sciencedirect.com/science/article/pii/S0896627311009615
Posted by      Christina U. at 3:00 PM MST

December 3, 2011

Drinking on the Job: How Flies get Drunk


Thursday, Friday, and Saturday night... I know what you're thinking. No class till Monday, no work, what a great night to get ahead on studying and up to date with all the problems in the world. However, I must point out that this plan is not the first thing that pops into everyone else's mind (at least those outside the world of the poor soul who is reading this neuroscience blog). Much of western society is built around the beverage/drug/poison we've come to know as alcohol. It has come to the attention of neuroscientists that our species is not the only one that takes pleasure in consuming firewater. It turns out some researchers were playing with the old 160 proof lab ethanol when they came upon an astounding discovery.

It all started when one turned to the other and croaked, "I'm drunkk frog haha." The other slurred back, "weelll thenn, gooood thing I'm not a fly huh?" That's when it hit them. "Eureka!" piped the first. "Oh my god!" yelled the second. "Let's get the flies wasted!" the second hollered back. They quickly spun off their lab stools and bustled for the fly room, stumbling and tripping the whole way. When they got to the room they immediately grabbed the first beaker of flies, ripped out the cork and filled it full of the powerful booze, instantly killing all the flies inside. Once they realized the horrendous massacre they had just committed in front of all the hundreds of thousands of other flies in the room, their drunken smiles slipped off. The beaker was placed on the counter as the two somber scientists held each other with silent tears streaming down their cheeks. Then one started laughing; irritated, the other muttered, "How can you laugh at a time like this? We just killed them, in front of their families... drowned them, squashed them like flies..." "Look, that one's drunk," the other researcher said, pointing at a fly that was clearly not adhering to the standard sober drosophila flight pattern. They watched the fly for nearly two hours, sitting on the fly room floor entranced by the fly's drunken escapades. Then, as its flight pattern began to return to normal, it headed back to the beaker full of booze and began gulping it down, without a thought for the dead brothers, sisters, cousins and children floating on top. Gleeful laughter burst from the researchers as they toasted and began taking large quaffs of their own. Quickly forgetting their bloody hands, they then began pulling the corks off the other beakers, filling up petri dishes with ethanol, and pipetting small volumes of ethanol in for the larvae--so no one was left out. They spent the whole night at the lab with their new-found drinking buddies and had a gay old time. A few days later, after their hangover was gone, they decided to write a paper.

It was determined that drosophila like the inebriation caused by excessive consumption of ethanol. Like us, the flies experience their pleasure through activation of the dopamine pathway. Activating this pathway induced LTP in the flies. Looking further into the flies' neural circuitry, the researchers determined that the rewarding memories the flies experienced (or the lack of memory if they got too plastered from not getting enough sugar beforehand) were localized, accessed, and retrieved by a distinct set of neurons in the mushroom body. Among the vast number of flies they got drunk, the researchers found some flies that didn't come back to drink. The experimenters were obviously offended and quickly squashed them. However, they didn't stop there; they proceeded to analyze the DNA so they could breed out the bad gene and make sure no other flies would be lame. They found that mutations in scabrous were responsible. They commonly call it the party pooper gene around the lab. Mutations in this gene, which "encodes a fibrinogen-related peptide that regulates Notch signaling, disrupted the formation of memories for ethanol reward" (Kaun, 2011). The experimenters are thought to have had a little bit too much fun drinking with the flies, but they have felt the public pressure. Now they're looking into how this research will help their own species, and we will undoubtedly be hearing more from them soon.

Hope you enjoyed the read, sincerely Charlie Stewart

"A Drosophila model for alcohol reward"
Karla R Kaun, Reza Azanchi, Zaw Maung, Jay Hirsh & Ulrike Heberlein
Nature Neuroscience April 17th 2011
Posted by      Charlie S. at 8:15 PM MST

Is it smart to believe in God?


Is it smart to believe in God? Recent science would suggest no. In a study comparing religiosity to intelligence across 137 nations (representing just over 95 percent of the world population), a negative correlation was found. The study reported that the correlation between national IQ, measured as psychometric g (the general factor of intelligence), and disbelief in God is 0.60.
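For anyone unfamiliar with what that figure means, it is just a Pearson correlation computed across nations. Here is a back-of-the-envelope Python sketch with made-up numbers (not the study's dataset) showing the calculation.

# Back-of-the-envelope sketch: a Pearson correlation between national IQ and
# the percentage of the population reporting disbelief in God. The seven
# country values below are invented for illustration only.
import numpy as np

iq        = np.array([98, 105, 85, 94, 101, 72, 90])    # hypothetical national IQs
disbelief = np.array([10, 60, 40, 81, 55, 5, 30])       # hypothetical % non-believers

r = np.corrcoef(iq, disbelief)[0, 1]
print(f"Pearson r between IQ and disbelief: {r:.2f}")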

It has long been speculated that religious belief and intelligence are negatively correlated. In a review of 43 studies conducted by Bell (2002), it was found that all but four showed a negative correlation.

This particular study chose to evaluate IQ and religious belief internationally using four sources of evidence: (1) negative correlations between religious belief and intelligence; (2) lower rates of religiosity among the top intelligence tiers as compared to the general population; (3) a decline of religious belief as cognitive functions mature with age; (4) a decline of religiosity as general intelligence is increased among populations in the twentieth century.

This data is explained in a variety of ways, although most speculate that people of higher intelligence are more skeptical and prone to ask questions rather than accept faith blindly. The scientists conducting this experiment believe that religion is a social construct designed to explain the unexplainable and to serve as a justice system where previously there was none. But in the modern era, people are less reliant on religion and becoming more reliant on science.

There are few exceptions to the linear relationship between IQ and religious belief, although a couple should be noted. The two most outlying countries are Cuba and Vietnam, which both have higher percentages of general disbelief (40 percent and 81 percent, respectively) than would be expected from their national IQs of 85 and 94, respectively. This anomaly was attributed to the fact that former or current communist countries are known to utilize strong atheist propaganda. Also, the United States is noted as being highly religious (only 10.5 percent are recorded as not believing in God) when compared with its national IQ of 98. This reflects a general trend among Northwest and Central European countries. One possible explanation is that Catholicism is highly prevalent among these countries, and predominately Catholic countries tend to show lower levels of disbelief than predominately Protestant countries. Another possible explanation could be the high levels of immigration to these areas from countries with high levels of religious belief. In fact, many of the immigrants from Europe to the United States immigrated in search of religious freedom. There have been a multitude of studies relating religiosity to a genetic factor or to the transmission of religious belief from parent to child, so it can be assumed that deeply religious immigrants passed on their religious traditions through generations.

Original Article: Lynn, R., Harvey, J., Nyborg, H., (2009). Average intelligence predicts atheism rates across 137 nations. Intelligence. 37, 1, 11-15.

Also Cited: Bell, P., (2002). Would you believe it?. Mensa Magazine. 12-13.
Posted by      Samantha H. at 5:51 PM MST
  Charlie Stewart  says:
If ignorance is bliss, maybe God is blessing the faithful with stupidity and the rest of us have to deal with the hell of knowing what a jacked up place we really live in?
Posted on Sat, 3 Dec 2011 6:46 PM MST by Charlie S.

Deep Brain Stimulation - A Different Approach


Let's talk about Parkinson's Disease (PD). PD is one of the most prevalent neurodegenerative disorders in the world for people over the age of fifty; as the population has aged and people have started living longer, PD diagnosis has increased significantly. Some of the symptoms include tremor (shaking), akinesia (inability to initiate movement), muscle rigidity, and later in the progression of the disease, slowed speech, blank staring, and dementia. So there are motor and cognitive problems associated with PD, but this post will deal with the reduction of the earlier-onset motor symptoms.

There are two main types of treatment for PD motor symptoms: drugs and deep brain stimulation (DBS). The drugs usually have a compound called L-Dopa, which is a precursor in the formation of the neurotransmitter dopamine, the lack of which has been implicated in inducing the motor symptoms of PD.

The other treatment, DBS, can only be used on some patients, namely those for whom the drugs have had little to no effect and who are also healthy enough to undergo surgery and live for more than a few years afterward. Surgeons create an open loop (i.e., not a closed circuit) by inserting an electrode into a specific region of the brain and connecting it to a pacemaker-type device called an implantable pulse generator (IPG) placed below the neck. Doctors then set the IPG to a certain frequency, so the electrode sends electrical pulses into the brain at regular intervals, and this has been shown to reduce the motor symptoms of PD.

The whole process takes up to a year, with the surgery followed by repeated adjustments to the stimulation parameters until the motor symptoms are reduced and the side effects of the stimulation are not too severe. But since PD is a progressive disease, the motor symptoms keep getting worse, and the DBS settings need adjustment more and more frequently as the disease progresses.

The issue with this is that no one really knows why DBS works, so all of the adjustments are guesses (systematic guesses, but guesses nonetheless), and patients need to keep coming into the hospital, which costs a lot of money, time, and frustration when their symptoms are not relieved. This is where research into "closed-loop," or real-time adaptive, DBS comes in. This potential form of DBS also involves a chip that records when natural electrical signaling occurs in the brain region where the DBS electrode sits. The recording then sets the properties of the DBS stimulation on the IPG and sets a timer for when the electrode will deliver that stimulation, producing a feedback loop and decreasing the need for constant hospital visits.
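Here is a purely conceptual Python sketch of the closed-loop idea, not the device's actual firmware: watch the recorded signal for reference spikes, and schedule a stimulation pulse a fixed delay after each one instead of stimulating on a fixed clock. The threshold and delay values are invented for illustration.

# Conceptual sketch (invented parameters, simulated data): each detected spike
# in the recording schedules a stimulation pulse a fixed delay later,
# producing a recording-to-stimulation feedback loop.
import numpy as np

rng = np.random.default_rng(2)
fs = 1000                                    # samples per second
signal = rng.normal(size=5 * fs)             # 5 s of simulated neural recording
signal[[1200, 2600, 4100]] += 8.0            # three fake "reference" spikes

threshold = 5.0                              # hypothetical detection threshold
delay_ms = 80                                # hypothetical stimulation delay

spike_samples = np.flatnonzero(signal > threshold)
stim_times_ms = spike_samples / fs * 1000 + delay_ms
print("stimulation pulses scheduled at (ms):", np.round(stim_times_ms).astype(int))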

One study found that, given specific criteria for the wait time and the number of stimulations, some PD symptoms (namely akinesia) were reduced to a greater extent than with open-loop DBS. However, the study also found many problems with using closed-loop rather than open-loop DBS to alleviate motor symptoms, though these problems have to do with what actually causes the motor problems associated with PD. As such, the study concludes that with more research on the efficacy of closed-loop DBS and on the details of the cause of PD motor symptoms, closed-loop DBS could become a treatment for PD that not only produces a more significant reduction of motor problems, but also enhances the long-term efficacy of DBS as PD progresses.

Rosin, B., Slovik, M., Mitelman, R., Rivlin-Etzion, M., Haber, S. N., Israel, Z., . . . Bergman, H. (2011). Closed-loop deep brain stimulation is superior in ameliorating Parkinsonism. Neuron, 72(2), 370-384. doi:10.1016/j.neuron.2011.08.023 Retrieved from http://www.cell.com/neuron/abstract/S0896-6273%2811%2900776-8
Posted by      Anna G. at 11:58 AM MST

Buried Alive: Finding cognizance in unresponsive patients


It's a horrible thought, one that undoubtedly haunts people's nightmares: the idea of being buried alive. The idea that someone might be taken against their will and forced, alive, into the depths of the earth, is terrifying. Personally, I have a similar but slightly altered phobia: asylums. The thought of being held against my will and, by physical or pharmaceutical restraints, forced to submission, scares the bejeebies out of me.

Many people in hospital rooms all over the world are facing a fate not unlike these scenarios. Over the years, evidence has accumulated that many of the people diagnosed as being in a persistent vegetative state, or coma, are at least somewhat conscious. That is, these people are trapped inside their own bodies - awake, but in such a state of unresponsiveness that they are thought to be in a coma! These people are not a small minority, either. In fact, as many as 43% of people diagnosed as vegetative are later reclassified as at least minimally conscious.

A new paper published in the November 10th issue of The Lancet outlines a very accessible new method for accurately diagnosing these people. Doctors used "bedside" electroencephalography (EEG) to look at the brains of sixteen vegetative patients. The doctors asked the patients to envision producing simple motor actions at precise times. Three of the patients "were found to be aware and capable of substantially and consistently modulating their EEG responses to command". The study was designed so that any response by the patients required higher order "top down" brain function - meaning that responsiveness was indicative of at least some level of cognizance.
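In spirit, the analysis comes down to asking whether the EEG recorded during "imagine moving" commands can be reliably told apart from rest. Here is a heavily simplified Python sketch with simulated feature values, not the Lancet group's actual pipeline or criteria.

# Simplified sketch (simulated data, not the study's method): compare an EEG
# feature between "imagine moving" and "rest" epochs; call the patient
# responsive only if the two conditions differ reliably.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
imagine = rng.normal(loc=1.4, scale=0.3, size=30)   # hypothetical motor-band power per epoch
rest    = rng.normal(loc=1.0, scale=0.3, size=30)

t, p = stats.ttest_ind(imagine, rest)
print("responsive" if p < 0.01 else "no reliable command-following detected")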
This study is very important, not for its novelty, but for its practical implications. Functional magnetic resonance imaging (fMRI) has been used for a while to recognize cognizance in unresponsive patients. In fact, EEG has also appeared in the literature as a means of recognizing cognition in individual unresponsive patients. This study is important in that the doctors used a group of patients (rather than a single patient) to show that EEG can be an effective replacement for fMRI.

For unresponsive patients, there are a number of potential hurdles that make using fMRI difficult or impossible: cost, scanner availability, the physical stress associated with traveling to a suitably equipped fMRI facility, movement artifacts, and interference from metal implants (present in many brain-damaged patients) are all serious hindrances. EEG, on the other hand, is much less expensive, is unaffected by metal implants and, perhaps most importantly, can be done at the bedside. The method introduced in this study can both bring brain imaging to patients who do not have access and supplement or replace the current system of diagnosing a vegetative state.
Currently, vegetative state is diagnosed by a team of specialists who use the coma recovery scale-revised (CRS-R) to assess the patient's auditory, visual, motor, oromotor, communication, and arousal functions. This method is highly subjective and prone to misdiagnosis. Each of the patients in this study, including the three cognizant patients, met the CRS-R requirements for vegetative state.

While the prospects of this study are exciting, there are limitations, say the authors. Of the twelve healthy controls in the study, only 9 (75%) were capable of producing brain activity that met the study's criteria for responsiveness. This means that, while using EEG can recognize cognizance, its failure to do so is not indicative of a lack of cognizance.

The authors of the study say that this method can lead to new technologies that allow the patient to interact with their environment and loved ones in meaningful ways. For some patients, this study might be the first step in regaining some form of freedom, some sort of escape from the prison of their bodies.


article link: http://www.sciencedirect.com/science/article/pii/S0140673611612245
Edited by      Kyle K. at 4:41 AM MST
Tags: coma, eeg, vegetative

December 2, 2011

Resonance among corporeal bodies: it might just exist in humans


"Self-construal" refers to how individuals view and make meaning of the self; at least two subtypes have been identified. Interdependent self-construal is a view of the self that includes relationships with others, and independent self-construal is a view of the self that does not include others. An individual's adoptive cognitive processing style with regard to context sensitivity is thought to be affected by the priming of these two types of self-construal. Simply put, the way a person thinks is influenced by how sensitive they are to their immediate context; priming interdependent or independent self-construal affects an individual's contextual sensitivity and by extension how an individual consequently thinks.


We affect how we think.


Okay, so that's not something new. The interesting thing is the notion that context sensitivity affects motor resonance among corporeal bodies. Yes, I'm talking about the human body and yes, we exhibit resonance. Apparently.


If you're having a hard time swallowing that idea for the first time (or if you're like me and find it intriguing in a nerdy way), perhaps a better way of thinking about it is as a sort of 'subconscious chatter' of an individual's behavior emanating out from their body; depending on how responsive we are to these continuously sent little packets of information, we subconsciously "resonate" with the chatter in our own bodies in a social setting. It seems to me that resonance is another way of looking at the nonconscious mind and its effects on our behavior in a way we wouldn't normally think about.


A recent article published in The Journal of Neuroscience presents the case that motor resonance occurs between corresponding muscles in two individuals (at least in the passive observation activity conducted in the study). Ten participants (five male, five female; age range 18-39 years) underwent focal transcranial magnetic stimulation (TMS) of the contralateral motor cortex while watching a video superimposed with an interdependent self-construal prime word, an independent self-construal prime word, or no prime word. Focal contralateral motor cortex TMS elicited motor-evoked potentials (MEPs, amplitudes adjusted to ~1 mV at the baseline fixation-cross control condition), measured from the abductor pollicis brevis (APB) muscle [the muscle of your palm attached to your thumb] of the participant's right hand. The 'motor resonance' part of the study was the passive observation of a video showing a model contracting the APB muscle to squeeze a rubber ball between the index finger and thumb. MEPs under interdependent priming that were larger than under the unprimed condition indicated greater motor resonance (presumably due to increased context sensitivity), and MEPs under independent priming that were smaller than under the unprimed condition indicated less resonance (presumably due to decreased context sensitivity).
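A quick Python sketch with invented MEP values shows how "facilitation" and "suppression" get read off in this kind of TMS design: each condition's mean MEP amplitude is expressed relative to the fixation-cross baseline. These numbers are illustrative only, not the study's data.

# Quick sketch (all MEP amplitudes invented, in mV): express each viewing
# condition relative to the fixation-cross baseline; ratios above 1 indicate
# facilitation, and larger ratios indicate greater "motor resonance".
import numpy as np

baseline       = np.array([1.00, 0.95, 1.05, 1.02])
unprimed       = np.array([1.30, 1.25, 1.40, 1.28])
interdependent = np.array([1.55, 1.60, 1.48, 1.52])
independent    = np.array([1.10, 1.05, 1.15, 1.08])

for name, cond in [("unprimed", unprimed),
                   ("interdependent", interdependent),
                   ("independent", independent)]:
    print(f"{name}: {cond.mean() / baseline.mean():.2f}x baseline")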


They found that observation of the videos, regardless of priming condition, facilitated MEPs of greater amplitude compared with the baseline fixation-cross condition (no-priming and interdependent-priming MEP increases > independent-priming MEP increase). Little surprise there; watching a video engages more processing than staring at a fixation cross. Interdependent self-construal priming facilitated motor cortical output beyond the unprimed facilitation, and independent self-construal priming relatively suppressed it. In short, interdependent self-construal priming enhances motor resonance, while independent self-construal priming somewhat depresses it.


That's pretty interesting. So how does that tie to the whole corporeal resonance-subconscious body-to-body chatter thing?


The underlying idea is behavioral mimicry in social settings; 'contextual motor resonance sensitivity' mediates nonconscious mimicry in social settings, presumably involving the (appropriately named) mirror neuron system. We resonate with other individuals on some level depending on our sensitivity to those around us. This implies that the reason we imitate or mimic other individuals' behaviors and actions is not necessarily that we are under the influence of something and more sociable (disinhibited) than we normally are, but rather that we have been brought to a more resonance-receptive (or less resonance-unreceptive) state; we are brought to that more receptive state through priming (by ourselves, others, quotes, the environment, etc.). Conversely, priming can also take us farther from resonance reception, toward stronger resonance resistance. The article concludes that the study therefore supports the idea that motor resonant systems in the human brain mediate behavioral mimicry.


A little more on the mirror system. Complications with the mirror neuron system, whether deficits or other abnormalities, may play a role in disorders of excessive or reduced social influence, such as autism spectrum disorders, compulsive imitation, or psychopathic personality traits. Novel therapeutic interventions based on the findings of this study may benefit such patients greatly, and may even benefit the rest of us as well: inducing interdependent self-construal could potentially make learning by observation more efficient.


Do you think resonance is the reason why we feel smarter when certain people stand next to us (or is that a bit too far of a stretch...)?



Link to the article: http://www.jneurosci.org/content/31/41/14531.full
Edited by      Patricia W. at 10:09 PM MST

Online NRSC 2100- Is it a Good Idea?


Over the last semester we have all participated in a class with a very different learning format from what we are used to. Whether we signed up for an online class or not, almost all of the educational content of this class has been presented online. Independent, online learning presents a very different experience than the traditional university course. Rather than seeing and hearing a professor lecture and discussing our learning in a social, classroom setting, we have obtained most of our information through online textbooks, tutorials, and videos, and have discussed it using Facebook, Hootcourse, and this blog. The question is: does this new form of education, which does not revolve around the face-to-face social experience between a teacher and a classroom, bring the same benefits? Is social interaction important for learning? Do the social capabilities of the internet (i.e., Facebook) sufficiently replace in-person communication?
In her article, "The Developing Social Brain: Implications for Education" (http://www.cell.com/neuron/fulltext/S0896-6273(10)00173-X ), Sarah-Jayne Blakemore explores the research that has been done on the role of social interaction in learning. Humans have a social brain; we are capable of intuitively knowing what certain facial expressions and body language mean. Babies developing language skills depend on social interaction for learning. Blakemore highlights a study (Kuhl et al., 2003) in which American babies were exposed to Mandarin Chinese through three different methods: 1) social interaction (reading and playing) with a native speaker, 2) videos of that same speaker, or 3) audio recordings of that same speaker. The only group that displayed a learned ability to distinguish between Mandarin sounds was the group that experienced social interaction. The mechanism behind the benefit of social interaction in learning is not yet understood. It could be that the infants are more motivated by social interaction or that the adult speaker is able to tailor their behavior to the child's needs in a social setting.
This doesn't necessarily point to the absolute necessity of social interaction for academic learning; language acquisition is different from the type of learning done in a university classroom, and the age of the participants and their brain development differ significantly from those of the typical student enrolled in this class. Blakemore explores one of these issues by examining the difference in brain activity between adults and adolescents. The brain undergoes significant changes in medial prefrontal activation during adolescence, and this area is active in social cognition tasks. Research suggests that the development of social learning skills is still taking place late into adolescence and that continuing to learn and have real-life social interactions during this period is crucial for the development of the brain.
She concludes her exploration with more questions and an analysis of the implications of this research for education. It is clear that some types of learning do require social interaction and that this remains true into late adolescence (and perhaps beyond?). The question of whether classes such as this one are as educationally valuable for the human brain awaits more research; until then, we get to be the judges of that.
Posted by      Megan M. at 5:36 PM MST

One gene to ruin them all: schizophrenia, bipolar, and DISC1


Discovered over a decade ago by scientists studying a Scottish family riddled with mental disorders, variations in the gene DISC1, or "disrupted in schizophrenia 1", have been heavily linked to the development of schizophrenia and, to a lesser extent, bipolar disorder. Only recently, however, have scientists begun unraveling the importance of the gene and its mechanistic functions. DISC1 encodes a scaffolding protein, a protein whose function is to help organize signaling pathways by forming complexes with multiple other proteins in a way that allows them to interact. Experiments have shown the critical importance of DISC1 in a multitude of developmental functions, including neuronal migration, axonal and dendritic growth, synaptogenesis, and neurogenesis, to name a few. Of the half-dozen or so DISC1-dependent signaling pathways discovered, a recent paper shows that disruption of the Wnt pathway is implicated in a significant share of the abnormal neurological features seen in persons with schizophrenia (and bipolar disorder).

The Wnt pathway is critical for cell proliferation during development and plays key roles in embryogenesis, neuronal growth, and certain types of cancer. Activation of this pathway results in an increase of β-catenin, which associates with other proteins to upregulate transcription of genes necessary for proper neuronal development. Recent experiments have shown that schizophrenic and bipolar patients with genetic variations of the DISC1 gene show inhibited Wnt signaling and reduced neuroblastoma (N2a) cell proliferation. These studies also showed that the most common DISC1 variants had a decreased affinity for the protein GSK3β, an intermediate in the Wnt pathway, to which DISC1 normally binds and which it inhibits. GSK3β is responsible for the phosphorylation and subsequent degradation of β-catenin in the absence of Wnt signaling, and it is thought that in the absence of Wnt, DISC1 is responsible for GSK3β's inhibition. Since schizophrenia-associated variations of DISC1 show a decreased affinity for GSK3β, this may suggest a mechanism for the reduced signaling seen in many areas of the schizophrenic brain.
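A cartoon model may help make that chain of reasoning concrete. The following Python sketch, with entirely invented numbers and a deliberately oversimplified steady-state equation, captures only the direction of the effect described above: weaker DISC1 binding leaves more active GSK3β, which degrades more β-catenin.

# Cartoon model (all numbers invented): DISC1 binds and inhibits GSK3-beta;
# less inhibition means more active GSK3-beta, more beta-catenin degradation,
# and weaker Wnt-driven transcription.

def beta_catenin_level(disc1_affinity, production=1.0, base_degradation=0.1,
                       gsk3b_total=1.0, gsk3b_degradation_rate=1.0):
    """Steady-state beta-catenin: production / (basal + GSK3b-driven degradation)."""
    active_gsk3b = gsk3b_total / (1.0 + disc1_affinity)   # higher affinity -> more inhibition
    return production / (base_degradation + gsk3b_degradation_rate * active_gsk3b)

print("common DISC1:      ", round(beta_catenin_level(disc1_affinity=10.0), 2))
print("risk-variant DISC1:", round(beta_catenin_level(disc1_affinity=2.0), 2))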

So why does any of this matter? Drug treatments. Lithium, the classic mood-stabilizing drug used to treat mania, was shown to inhibit GSK3β just a little over a decade ago, and its mechanism of action is only now coming to light. Some antipsychotics have also been shown to indirectly stimulate the Wnt pathway, which might be a significant aspect of their pharmacological actions. With this knowledge in hand, selective inhibitors of GSK3β (mimicking the actions of DISC1) or agonists for Frizzled, the Wnt receptor, could effectively attenuate many of the symptoms associated with schizophrenia and bipolar disorder. Along with brain imaging and genetic testing, disruptions in this pathway could also serve as an early marker for children predisposed to these mental disorders. Drugs such as those mentioned above could also prove useful in children at high risk for developing schizophrenia or bipolar disorder, in whom an early drug regimen could blunt the onset of the disease and possibly prevent a lifelong struggle with a mental disorder. Although there are many other genes and pathways implicated in these cognitive disorders, the massive developmental impact of DISC1 variants and their effects on the Wnt pathway opens up promising new therapeutic opportunities for the treatment of schizophrenia and bipolar disorder.

Paper: "Common DISC1 Polymorphisms Disrupt Wnt/GSK3β Signaling and Brain Development", Singh et al.

http://www.sciencedirect.com/science/article/pii/S0896627311008841
Posted by      Kevin K. at 4:43 PM MST

Alcoholism: Can it be Cured?


Alcoholism remains one of the most deadly chronic diseases, yet it is surprisingly controversial with regard to symptoms, treatment, diagnosis, and even heritability. How can a disease be so dangerous and so long-standing and yet not even be remotely understood? As medicine, science, and technology advance, we are moving toward that goal, but treatment, including emerging pharmacotherapies for addiction, is still only moderately successful.

A huge part of this lack of understanding is the fact that alcoholics differ from one another in a variety of ways, including their symptoms. Alcohol addiction involves numerous biological mechanisms, which vary in their presentation and withdrawal effects. Recent studies suggest that these different mechanisms represent different stages of alcoholism, which could be relieved by different treatments. Regardless, these treatments have to block the motivation to seek and consume alcohol.

Researchers have determined two categories: relief and reward drinkers. Reward drinkers drink to reward themselves the same way many drugs work, by activating brain reward pathways. Relief drinkers drink to relieve negative emotions, such as anxiety and feelings of withdrawal. Obviously these two varying types of alcoholics require different treatments.

It has also been discovered that alcoholism is marginally heritable. Genetic susceptibility can be passed down from generation to generation; however, these varying types of alcoholism are largely shaped by environmental factors, such as how often the individual is exposed to stress or placed in circumstances of reward.

So now, is it possible to treat either one or both of these forms of alcoholism? Studies show that the reward form of alcoholism is mediated by a collaboration of endogenous opioids and dopamine release. Activation of dopamine in the mesolimbic pathway has been implicated in many other drug addictions. Dopamine is regulated in the corticomesolimbic system by a receptor known as MOR (the mu-opioid receptor), which, if blocked, prevents the dopamine release caused by alcohol consumption. A drug known as naltrexone is an antagonist of opioid receptors and is currently being researched as a treatment for reward alcoholics.

Next is relief drinking. Relief drinkers drink to suppress stress, anxiety, discomfort, pain, and dysphoria. These alcoholics generally end up setting the stage for routine and frequent alcohol consumption to escape negative emotions. Recently, it has been discovered that release of CRF is central to this behavior. CRF (corticotropin-releasing factor) is a peptide that acts on the anterior pituitary and is released upon alcohol consumption in relief drinkers; it in turn triggers ACTH release and stimulates cortisol secretion, reducing stress. CRF regulation and function are partly genetically determined, which makes a pharmacological cure more difficult and less likely to be successful. However, studies have suggested that individuals with naturally decent regulation of CRF could likely be treated for relief alcoholism via CRF1 antagonism. Research is still ongoing as to whether this would be a sufficient method to treat alcoholics.

Alcoholism is a very complex disease, one that challenges human understanding and pushes it to its therapeutic limits. The work reviewed here marks a tremendous step in the right direction toward understanding alcoholism and possibly curing the disease one day, but until that day comes there is plenty more to learn and to gain.

Heilig, Markus, David Goldman, Wade Berrettini, and Charles P. O'Brien. "Pharmacogenetic Approaches to the Treatment of Alcohol Addiction." Nature Reviews Neuroscience. 20 Oct. 2011. Web. 02 Dec. 2011.

http://www.nature.com/nrn/journal/v12/n11/full/nrn3110.html
Posted by      Mark A. at 12:47 PM MST

December 1, 2011

Seeing With Your Mind.


When asked to imagine a particular object, the mind seems to conjure the image instantly, and you can observe its many features with your mind's eye. Given that we all do this dozens of times a day, it seems like a fairly boring and menial occurrence. However, if you stop to think about the mental processes that underlie this phenomenon, you'll see how complex and important it is.

To first test the manner in which mental images are formed, participants were shown an empty grid with a lowercase letter beneath it. They were asked to imagine the corresponding uppercase letter in the grid as quickly as possible and state whether the letter would cover a particular block in the grid. As the letters became more complex (containing more segments), the response time increased. This finding led the researchers to believe that an image is not formed all at once, but rather part by part (the letter F should take longer to recall because you must recall three parts, as opposed to an L, which contains only two). It also appears that the parts were imaged in nearly the same order in all cases, based on the fact that when the subjects were asked to draw individual letters, the parts were drawn in the same sequence at least 75% of the time.
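The prediction being tested here is simple enough to sketch: if letters are imaged part by part, response time should grow roughly linearly with the number of segments. Here is a small Python illustration with made-up numbers, not the experiment's actual data.

# Small sketch (numbers invented): fit a line relating the number of segments
# in a letter to the time taken to image it in the grid task.
import numpy as np

segments       = np.array([2, 2, 3, 3, 4, 4])              # hypothetical segment counts per letter
response_times = np.array([620, 640, 700, 710, 790, 805])  # hypothetical response times in ms

slope, intercept = np.polyfit(segments, response_times, 1)
print(f"~{slope:.0f} ms of extra imaging time per additional segment")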

These findings were further supported by auxiliary evidence of the brain forming patterns. For example, ------ is viewed as a straight line, not six dashes. Similarly, XXX XXX is viewed as two groups, not six X's, as is XXXooo. Thus the brain is predisposed to organizing things into parts, or "perceptual units." Given this, it seems likely that the brain stores the letter F by its three individual parts rather than as a whole, and when the letter is recalled it is imaged one part at a time based on previously stored perceptual units.

Why is this the case? Why not simply remember something as a whole rather than by its parts? The answer comes via the limitations of the brain.

Previous research has shown that it is more difficult to hold onto a mental image while you are paying attention to actual visual stimulation. This suggests that some of the cortices involved in visual sensory input are also implicated in mental imagery. To test why things are organized by parts, the researchers looked at the processing of visual stimuli.

Two major visual processing areas are located in the inferior temporal lobe and the parietal lobe. In monkeys, ablation of the inferior temporal lobe causes difficulty in discriminating patterns and shapes but no difficulty with object location. The reverse is true in animals with ablations of the parietal lobe, indicating that each lobe has a different functional role in visual processing. Thus the "what" and the "where" of an object are processed separately.

This explains why we image things sequentially. The shape of a part is stored separately from its location relative to other parts. For example, an F is composed of a vertical bar connected at the top and in the middle to two horizontal bars. The fact that 94% of the participants drew an F by drawing the vertical line first, then the horizontal bars, shows that some parts are prerequisite to others and thus the letter must be imaged one part at a time.

This division of objects into parts has great importance. It is why we are able to recognize letters in different fonts. Rather than memorizing how an F looks in every form and being confused upon seeing a new font we merely have to know the parts of an F and how they relate to each other. This can be further extrapolated out away from the simplicity of letters. A person can assume many forms. We can stand, sit, curl in the fetal position and still we are able to recognize it as a person because we recognize the parts and how they are connected to one another, as opposed to knowing exactly how a person looks when they are curled.

This division relieves us of having to know much more specific information thus freeing up brain power so we can say, know how to write a blog.

Kosslyn, S. M., et al. (1988). Aspects of a Cognitive Neuroscience of Mental Imagery. Science, 240.
http://wjh-www.harvard.edu/~kwn/Kosslyn_pdfs/1988Kosslyn_Science240_Aspects.pdf
Posted by      Zach I. at 4:22 PM MST
  Zach Irell  says:
Yeah, I was wondering the same thing... they don't mention it at all in the paper, most likely because they don't have a good answer. I think there is definitely merit in that answer, but from an evolutionary standpoint it makes more sense for us to encode things in the way presented by these researchers. Letters are a very simplistic way for them to explain this, and that's why we can call it into question, but I think the results and direction of the paper point to things on a much larger and more complicated scale.
Posted on Sun, 4 Dec 2011 4:51 PM MST by Zach I.
  Christina Uhlir  says:
Did the paper get into templates and expectations? By templates I mean mental representations of objects, people, or scenes that act as prototypes and influence a person's perceptual experience (e.g. drawing a letter from memory and comparing it to the one you see in front of you), and, based on that, the expectations a person has about the stimulus and the problematic interactions that could result.
Posted on Sun, 4 Dec 2011 5:04 PM MST by Christina U.
  Zach Irell  says:
This paper refers to how mental representations of objects, people, and scenes are stored and how we recall them.
Posted on Thu, 8 Dec 2011 8:42 PM MST by Zach I.

Of Mice and Men


Exploitation and misuse of information are things the media seems inextricably tied to, so it should be no surprise that there is a great disconnect between scientific organizations and the general public. One organization in particular is trying to bridge this widening gap in order to prepare society for the implications of certain scientific methods as technology continues to advance. The UK's newly formed Academy of Medical Sciences is working to discuss the "scientific, ethical, and regulatory ramifications" of working with ACHMs, animals containing human material. These animals are the result of scientists adding a small number of human genes to mice. The organization is working to create a set of limitations and rules for what types of experiments can be done in the future using this kind of animal model. Although none of the procedures performed so far has exceeded those limits, the group still felt it was important to lay the ground rules going forward, and it wants those rules to reflect the public's needs as well as the medical goals these studies serve.

Perhaps more pressing at present is the group's other function. The Academy of Medical Sciences feels it is also necessary to openly discuss these processes and regulations with the public in order to counter the bad publicity that ACHM models are generating. Fueled by constant speculation, the media and politicians are misinforming the public about what is really going on. Scientists, for their part, can be hesitant to go public with their procedures because the work is often misinterpreted, and the resulting negative perception can hurt funding for projects whose good intentions the public simply cannot see.

The public seems rather obsessed with the idea of "mad scientists" who create animal-man hybrids in their laboratories just because they want to, and because they can. It is widely thought that ACHM models are used to create animal hybrids, and that stem cell research is done in order to create a cloned human race. Hollywood pushes this idea further through seemingly scientific movies and TV shows, which people can interpret as being based on real evidence. These very viewpoints are the reason the organization wants to openly discuss the benefits, as well as the setbacks, of performing such studies. It wants to address not only the emotional and ethical rationale behind the experiments, but also the medical reasoning and justifications.

The article used the example of US Senate candidate Christine O'Donnell's speech against human-animal hybrids to show how misinformed or misjudged claims can mislead when the science is not fully understood. O'Donnell was quoted saying that "scientists were cross breeding humans and animals," and she went on to claim this had produced functioning human brains within mice. While there is obviously no factual evidence to support her claim, the publicly stated accusation fed an increasingly negative viewpoint among animal rights activists and anti-genetic engineering supporters.

The main reason this type of animal model is used is to study different aspects of various diseases in specific biological contexts. Scientists are not creating mice with an exact replica of the human form of a disease; they are really only altering a few genes, if that. Eventually there will be technological advances that allow new and improved studies to be done, so it is very important that researchers let the public know now, ahead of time, exactly what they plan to study and learn from present and future experiments. This will not only increase funding (because there will be more understanding of and support for the studies), but also reduce the bad publicity that this emerging scientific field has to deal with.

http://www.nature.com/neuro/journal/v14/n12/full/nn1211-1489.html
Posted by      Amber S. at 1:47 PM MST
  Christina Uhlir  says:
Do you read the Wall Street Journal? There was a piece about "Citizen Scientists" that gets to your point about the misuse of information.

If not, here is the article:

http://online.wsj.com/article/SB10001424052970204621904577014330551132036.html
Posted on Sun, 4 Dec 2011 4:39 PM MST by Christina U.
  Amber Spence  says:
I have not, sounds interesting! I'll check it out, thanks!
Posted on Sun, 4 Dec 2011 10:37 PM MST by Amber S.

Neocortex: Why We are Better


The neocortex is the outer layer of the cerebral hemispheres and, in humans, is believed to be involved in higher functions such as language, conscious thought, and sensory perception. There has been a great deal of interest and debate among developmental neurobiologists regarding the molecular mechanisms and molecules involved in the differentiation and development of the neocortex. But to start with, why this region of the brain in particular?

The neocortex is not present throughout the animal kingdom; it is presumed to be specific to mammals. Humans, for example, are able to perform particular functions and achieve higher-order thinking because of the processes of the neocortex. The development of the neocortex, and its many remaining unknowns, is therefore a heavily discussed subject in this field: it can help explain the evolution of human behavior, and it draws high interest from researchers seeking answers about the development of mankind.

Within the past 20 years there has been significant progress in identifying patterning of the neocortex through state-of-the-art molecular approaches; however, we are still very far from a complete picture. For example, we now know that anterior-posterior patterning has a genetic basis that, if altered, can lead to diseases such as smooth brain syndrome (lissencephaly). In this case the alterations involve the concentration gradients of two molecules, DCX anteriorly and LIS1 posteriorly. Additionally, there is now evidence of genetic patterning that is symmetric between the two hemispheres and that, if altered, can also lead to severe phenotypes. Another hotly debated question is whether the differentiation and development of the neocortex depend more on intrinsic or extrinsic mechanisms, a question that informs our understanding of developmental plasticity and the importance of critical and sensitive periods. In other words, how much is the regionalization of areas in the neocortex affected by varying levels of essential transcription factors?

Beyond molecular techniques, Chen and his colleagues are investigating these questions through a combination of twin studies and structural MRI to estimate the relative contributions of genetic and environmental factors to regionalization of the cerebral cortex. They conclude that although "genetic factors may have a boisterous influence in the establishment of regionalization of the cortex, functional areas do not seem to be influenced by the same factors," implying that environmental factors can also play a significant role in neurodevelopment.

With this newly employed combination of analytical approaches and the surge of interest in neocortical development, an exciting new chapter in the study of human brain development is opening, one in which we can hopefully determine the genetic and environmental factors behind the higher-order functions of the human and mammalian brain. With this data and more advanced research on human brain development to come, we may one day uncover genetic or environmental patterning of the brain that leads to cures for diseases now considered incurable.

Schlaggar, B. "Mapping Genetic Influences on Cortical Regionalization." Neuron, Volume 72, Issue 4, pp. 499-501, 17 November 2011.

http://www.sciencedirect.com/science/article/pii/S0896627311009597

Sarah Ha
Posted by      Sarah H. at 11:01 AM MST
  Christina Uhlir  says:
Do you know how new this field of study is?
Posted on Sun, 4 Dec 2011 4:09 PM MST by Christina U.




 Copyright © 2007-2016 Don Cooper, Ph.D. All rights reserved.