. . .

The other day I posted a brief essay (of sorts) that continued my investigation into this notion that many of us have about our ‘self’… For over five years now I have been fortunate (or, some may say, unfortunate) enough to stumble upon many seemingly unusual and/or socially counter-intuitive alternatives to Westernised ways of thinking about things… These alternatives, being anything but wrong from my perspective, have pushed my boat out way beyond what much of Western psychology and philosophy has ‘reasonably’ presumed about the universe in which we live, as well as how we, as sentient beings, relate to it. These ideas have – to say the least – drastically challenged my own personalised philosophies about what reality might actually be, as well as how I choose to live my life… Not to mention that they have changed the way I think about nearly everything I thought I knew anything about, i.e. social etiquette, certain scientific knowledge, logical reasoning, etc… They have done so to the point that most of the certainties I had stubbornly held on to over the years have now shown themselves to be – on the whole – little more than delusions: just about as uncertain, and as biased towards their (or even my) own ends, as Russell and Whitehead’s “Principia Mathematica” turned out to be when set beside Gödel’s “Incompleteness Theorems”.

Be it known… It has certainly never been my intention to undermine any of our Westernised ways of thinking, or any of our socially acceptable habits of being and/or notions of perceiving the world around us. Rather, my aim has always been to challenge any dogmatic certainties that we might hold cradled a little too close to our psyches (much like Linus’ security/comfort blanket in Charles M. Schulz’s Peanuts comic strips), and/or any overly cherished ‘certainties’ that we may harbour in our ever-changing mind-streams while going about our busy daily lives on the surface of this planet… A jewel of a planet that ‘floats’ – almost miraculously – in an inky black void amidst a cornucopia of never-ending universal change (stars and galaxies being born and then dying). Certainly the universe around us never rests for one second. It resides in a continual state of unending change. Nothing… And I mean nothing, ever remains the same for very long, let alone forever. So why should we hold on to any certainties? Or live clinging to securities that one day the universe will snatch away from us?

Within this state of perpetual change there lies the natural ebb and flow of chaotic patterns that intermingle, interrelate and feed back upon each other, allowing more complex systems to evolve and/or arise within the non-linear tapestry of atomic interactivity, instability and the resulting conjoined possibilities. These biological frames of living matter (that we call our bodies) are a testament to this natural arising of life and, as such, I have searched both high and low to formulate a clearer sort of reasoning (at least for my ‘self’) so as to better understand and perceive the natural order of things (regardless of what the general consensus might be), as well as to relate better to this experience of being a so-called ‘living’ entity.

I am humbled to say that, during this search, I have found many other philosophies and understandings that closely relate to my own, all with minor variations that produce a sort of diversity and yet still point towards a sort of perennial philosophy. From these various ‘schools of thought’ I have learnt many pertinent things, and have been afforded a chance to develop and further attune my own understanding and attitude toward life. Much as Douglas Hofstadter pointed out in his cryptic lecture “Analogy As The Core Of Cognition“, I continually found my ‘self’ observing a type of universal self-similarity between these various ways of thinking… Something that kept reminding me of what some have called “God’s Thumb Print“… It has allowed me to glimpse a part of the infinite whole and realise that everything is interrelated and interconnected… And it was this interrelatedness that eventually brought me into contact with some highly perceptive and well-developed philosophies concerning the natural order of things, the mind and how we perceive things, as found in Taoism and Buddhism.

For me, Buddhism has been the most fascinating of all the philosophies that I have learnt about. Its central doctrines highlight the most important – and sometimes much overlooked – aspects of living. Everything changes and nothing stays the same (impermanence). Everything is interconnected with everything else and we are interdependent with everything else (interdependence). Mind is all-pervasive, and our states of mind have a very powerful effect on the way in which we perceive the world around us… Indeed, the power of mind can do some very ‘supernatural’ things, like changing the shape of the brain, or affecting the subtle energy channels within the body to produce highly unusual results. Consider Thích Quảng Đức, the Vietnamese Mahayana Buddhist monk who burned himself to death, without any display of pain or suffering, at a busy Saigon road intersection on 11 June 1963 in protest at the persecution of Buddhists by South Vietnam’s Roman Catholic government. And then there is the development of awareness, especially of our states of mind as they arise and subside, which is the key to finding a balanced and holistic way of living, one that propagates the most well-being for all as well as one’s ‘self’.

As such, I still come back to Buddhism every day to find new (though they are, in actual fact, nearly 2,600 years old) and highly relevant teachings (and/or parallels) about how to understand and relate to this experience of living to positive effect. Many of the Buddhist philosophies that I have learnt about closely mirror some of the scientific philosophies that have recently surfaced (or been re-discovered) and, as such, I find a great source of wisdom and inspiration within Buddhism’s bountiful depths. As a sort of testament to its universal usefulness, there seems to be a general acceptance within the NHS that Buddhist techniques can actually help people, especially when dealing with much of the anxiety and depression we find in the modern world. This can be clearly seen in the fact that the NHS – here in the UK – now offers mindfulness training, which really does seem to be helping people cultivate and develop better awareness of their lives, surroundings and ways of being… But, despite this adoption of Buddhist practices by the UK’s health service, a lot of the most important parts of mindfulness training seem to have been skirted over or simply ignored…

Why is this? Well… For starters, many of the eminent Masters who have practised meditation and mindfulness for many “lifetimes” (reincarnation being a subject that I will broach in a coming post) just don’t seem to be included in the scientific equation… No doubt some already are, but many are not… More importantly, the NHS is not contacting those who are properly educated in mindfulness to seek their advice on how best to implement a course that teaches it. Perhaps the ‘solid’ scientific establishment that confidently backs up modern medicine with facts and figures just doesn’t hold the Karmapa or the Dalai Lama in high regard as contemporaries, because they were trained in their own self-accredited universities of reason and knowledge and thus lack the relevant degrees to substantiate passing on their knowledge and wisdom within modern Westernised academia… ? Or maybe the deliberate shrouding of many Buddhist practices by the monks and Lamas themselves only adds to the stigma of religious mysticism that already surrounds Buddhism here in the West… ? Either way, the only way to dispel this somewhat ignorant (maybe even arrogant) outlook is to point out what many people fail to understand: both the Karmapa and the Dalai Lama have trained harder and longer in these ancient techniques of mindfulness and awareness than any graduate or PhD could have done in their respective fields over the course of their lives, making them by far the foremost teachers in their unique disciplines of mindfulness and awareness training.
Nor is Buddhism a religion in the traditional sense… Rather, I would say that it is a highly developed philosophy and science of mind, one that has been crafted from years of practice, whereby each exponent has experimented with many techniques until those that work (in developing mindfulness) are recorded and practised diligently by further lineages, all of which stem directly back to Gautama Buddha.

Until this is clearly grasped by many of us, for me, the NHS beginning to train people in mindfulness without proper guidance is a bit like a novice (with no formal training in the subject) teaching a student something that they are not really qualified to teach. Imagine someone with no formal training in science whatsoever who notes that quantum physics shows us a lot about the way the world works around us (on a mechanical level), yet goes on to ignore most of the relevant detail behind it, using only snippets of information that seem to suit their own ends – showing the Double-Slit Experiment, say, stating to a student that this clearly demonstrates the fundamentally probabilistic nature of quantum mechanical phenomena, and then awarding them a degree. Okay… But what happened to the rest of the data that those researching quantum physics in proper academia have discovered over many years – all of which helps the student develop a deeply penetrating idea that leads to a more coherent and complete picture of the whole of quantum mechanics, so that they can continue the complicated and arduous research at the cutting edge of discovery to help as many others as possible? For sure, people have to start somewhere… But I strongly feel that they should start as they mean to go on, i.e. learn from the people who know what they are talking about.

On a less critical note… At least the NHS is beginning to realise that the mind is a powerful tool that can help heal itself without the need for medical or pharmacological intervention most of the time. Perhaps this will prove akin to modern medicine taking the first steps towards a philosophy in which the patient might (in many instances) be better equipped to treat themselves than a doctor is, especially if given the right teachings and practices to perform… ?

As part of this lifetime journey with Buddhism, I will continue to write entries on this website about what I find and discover along the way. Certainly there is no other aim to this practice than to arrange and present my thoughts to anyone who might be interested in reading what I have to say. As such, I must stress that, while I do my best to make sure the information provided within these pages is as correct and accurate (from my own perspective) as can be, I am nonetheless a novice. And so I would never use anything that I have written here as fact without checking it out for yourself and finding what you really think and feel about it first. Most who have been following the entries on this website for some time already know my wariness of anything purporting to offer ultimate fact or certainty. As Lord Byron was once noted to have said, “If I be a fool, it is, at least, that I be a doubting one; for I envy no one the certainty of his self-approved wisdom…” And as Einstein once said about mathematics, “As far as the laws of mathematics refer to reality, they are not certain, and as far as they are certain, they do not refer to reality.” As such, I find myself resigned to a continual modification of what I think I know, turning it around so as to view it from every possible angle and see whether there are any gaps in it… And, as I trawl through the mountains of research that come my way, I find snippets that offer a ‘possible’ insight, or fit, for some of those gaps… Still, there will always be gaps… Just as the measured length of Britain’s coastline depends on the length of the ruler used to measure it, so too will there be gaps in our understanding that somehow leave the puzzle of consciousness, at least for myself, never quite answered entirely… Slightly clearer it may seem than what most of us originally started with… But never complete. Only direct experience will bring about completion.

So, until total, completely pure, immersive and direct experience is achieved, one that can transcend dualistic thought altogether… I continue with my conceptualised trains of thought and make the following offering that might shed a tiny bit of light on how and why the notion of a “self” could come about, one that perhaps evolved (and was naturally selected for) over time in the cellular infrastructure of our brains.

Just the other month, while thinking about some of the other unusual aspects of the ‘self’ (of which I will write more in future posts) as I painted the BIG green doors outside, I came across the following New Scientist article stuck to the bottom of my paint can, covered in gently arcing streaks of sticky green paint. It was the word “consciousness” that caught my eye… So, prising it gently from the base of the tin, the article’s front page slowly began to reveal itself. Once it was free from the can’s underside, vibrant rounded strips of summery Buckingham Green still obscured enough of the article to make it unreadable. Thus I took it to the kitchen table and gently wiped it clean with a spirit-soaked rag. As the thick streaks of paint thinned and spread across the page, the whole became more legible… The green became so thinned that anything printed underneath could now be clearly seen. Once I could read most of the text, I set it aside in the bright heat of the sun, leaving its wet, soft pulp to dry into a manageable form while I painted another coat of green onto the old barn’s doors.

Not long after finishing the last coat, the page was dry enough to handle… And so I set about my usual morning ritual of having a cup of tea in the cool morning breeze, taking cover under the waning shade of the granary’s hulking form, as I read the somewhat shabby, pea-green, but now legible article that had been rescued from certain doom… And this is what I read…

. . .

Are These The Brain Cells That Give Us Consciousness?

The brainiest creatures share a secret – an odd kind of brain cell involved in emotions and empathy that may have accidentally made us conscious

THE origin of consciousness has to be one of the biggest mysteries of all time, occupying philosophers and scientists for generations. So it is strange to think that a little-known neuroscientist called Constantin von Economo might have unearthed an important clue nearly 90 years ago.

When he peered down the lens of his microscope in 1926, von Economo saw a handful of brain cells that were long, spindly and much larger than those around them. In fact, they looked so out of place that at first he thought they were a sign of some kind of disease. But the more brains he looked at, the more of these peculiar cells he found – and always in the same two small areas that evolved to process smells and flavours.

Von Economo briefly pondered what these “rod and corkscrew cells”, as he called them, might be doing, but without the technology to delve much deeper he soon moved on to more promising lines of enquiry.

Little more was said about these neurons until nearly 80 years later, when Esther Nimchinsky and Patrick Hof at Mount Sinai University in New York also stumbled across clusters of these strange-looking neurons. Now, after more than a decade of functional imaging and post-mortem studies, we are beginning to piece together their story. Certain lines of evidence hint that they may help build the rich inner life we call consciousness, including emotions, our sense of self, empathy and our ability to navigate social relationships.

Many other big-brained, social animals also seem to share these cells, in the same spots as in the human brain. A greater understanding of the way these paths converged could therefore tell us much about the evolution of the mind.

Admittedly, to the untrained eye these giant brain cells, now known as von Economo neurons (VENs), don’t look particularly exciting. But to a neuroscientist they stand out like a sore thumb. For one thing, VENs are at least 50 per cent, and sometimes up to 200 per cent, larger than typical human neurons. And while most neurons have a pyramid-shaped body with a finely branched tree of connections called dendrites at each end of the cell, VENs have a longer, spindly cell body with a single projection at each end with very few branches (see diagram below). Perhaps they escaped attention for so long because they are so rare, making up just 1 per cent of the neurons in the two small areas of the human brain: the anterior cingulate cortex (ACC) and the fronto-insular (FI) cortex.

Their location in those regions suggests that VENs may be a central part of our mental machinery, since the ACC and FI are heavily involved in many of the more advanced aspects of our inner lives. Both areas kick into action when we see socially relevant cues, be it a frowning face, a grimace of pain or simply the voice of someone we love. When a mother hears a baby crying, both regions respond strongly. They also light up when we experience emotions such as love, lust, anger and grief. For John Allman, a neuroanatomist at the California Institute of Technology in Pasadena, this adds up to a kind of “social monitoring network” that keeps track of social cues and allows us to alter our behaviour accordingly (Annals of the New York Academy of Sciences, vol 1225, p 59).

The two brain areas also seem to play a key role in the “salience” network, which keeps a subconscious tally of what is going on around us and directs our attention to the most pressing events, as well as monitoring sensations from the body to detect any changes (Brain Structure and Function, DOI: 10.1007/s00429-012-0382-9).

What’s more, both regions are active when a person recognises their reflection in the mirror, suggesting that these parts of the brain underlie our sense of self – a key component of consciousness. “It is the sense of self at every possible level – so the sense of identity, this is me, and the sense of identity of others and how you understand others. That goes to the concept of empathy and theory of mind,” says Hof.

To Bud Craig, a neuroanatomist at Barrow Neurological Institute in Phoenix, Arizona, it all amounts to a continually updated sense of “how I feel now”: the ACC and FI take inputs from the body and tie them together with social cues, thoughts and emotions to quickly and efficiently alter our behaviour (Nature Reviews Neuroscience, vol 10, p 59).

This constantly shifting picture of how we feel may contribute to the way we perceive the passage of time. When something emotionally important is happening, Craig proposes, there is more to process, and because of this time seems to speed up. Conversely, when less is going on we update our view of the world less frequently, so time seems to pass more slowly.

VENs are probably important in all this, though we can only infer their role through circumstantial evidence. That’s because locating these cells and then measuring their activity in a living brain hasn’t yet been possible. But their unusual appearance is a signal that they probably aren’t just sitting there doing nothing. “They stand out anatomically,” says Allman. “And a general proposition is that anything that’s so distinctive looking must have a distinct function.”

Fast thinking

In the brain, big usually means fast, so Allman suggests that VENs could be acting as a fast relay system – a kind of social superhighway – which allows the gist of the situation to move quickly through the brain, enabling us to react intuitively on the hop, a crucial survival skill in a social species like ours. “That’s what all of civilisation is based on: our ability to communicate socially, efficiently,” adds Craig.

A particularly distressing form of dementia that can strike people as early as their 30s supports this idea. People who develop fronto-temporal dementia lose large numbers of VENs in the ACC and FI early in the disease, when the main symptom is a complete loss of social awareness, empathy and self-control. “They don’t have normal empathic responses to situations that would normally make you disgusted or sad,” says Hof. “You can show them horrible pictures of an accident and they just don’t blink. They will say ‘oh, yes, it’s an accident’.”

Post-mortem examinations of the brains of people with autism also bolster the idea that VENs lie at the heart of our emotions and empathy. According to one recent study, people with autism may fall into two groups: some have too few VENs, perhaps meaning that they don’t have the necessary wiring to process social cues, while others have far too many (Acta Neuropathologica, vol 118, p 673). The latter group would seem to fit with one recent theory of autism, which proposes that the symptoms may arise from an over-wiring of the brain. Perhaps having too many VENs makes emotional systems fire too intensely, causing people with autism to feel overwhelmed, as many say they do.

Another recent study found that people with schizophrenia who committed suicide had significantly more VENs in their ACC than schizophrenics who died of other causes. The researchers suggest that the over-abundance of VENs might create an overactive emotional system that leaves them prone to negative self-assessment and feelings of guilt and hopelessness (PLoS One, vol 6, p e20936).

VENs in other animals provide some clues, too. When these neurons were first identified, there was the glimmer of hope that we might have found one of the key evolutionary changes, unique to humankind, that could explain our social intelligence. But the earliest studies put paid to that kind of thinking, when VENs turned up in chimpanzees and gorillas. In recent years, they have also been found in elephants and some whales and dolphins.

Like us, many of these species live in big social groups and show signs of the same kind of advanced behaviour associated with VENs in people. Elephants, for instance, display something that looks a lot like empathy: they work together to help injured, lost or trapped elephants, for example. They even seem to show signs of grief at elephant “graveyards” (Biology Letters, vol 2, p 26). What’s more, many of these species can recognise themselves in the mirror, which is usually taken as a rudimentary measure of consciousness. When researchers daub paint on an elephant’s face, for instance, it will notice the mark in the mirror and try to feel the spot with its trunk. This has led Allman and others to speculate that von Economo neurons might be a vital adaptation in large brains for keeping track of social situations – and that the sense of self may be a consequence of this ability.

Yet VENs also crop up in manatees, hippos and giraffes – not renowned for their busy social lives. The cells have also been spotted in macaques, which don’t reliably pass the mirror test, although they are social animals. Although this seems to put a major spanner in the works for those who claim that the cells are crucial for advanced cognition, it could also be that these creatures are showing the precursors of the finely tuned cells found in highly social species. “I think that there are homologues of VENs in all mammals,” says Allman. “That’s not to say they’re shaped the same way but they are located in an analogous bit of cortex and they are expressing the same genes.”

It would make sense, after all, that whales and primates might both have recycled, and refined, older machinery present in a common ancestor rather than independently evolving the same mechanism. Much more research is needed, however, to work out the anatomical differences and the functions of these cells in the different animals.

That work might even help us understand how these neurons evolved in the first place. Allman already has some ideas about where they came from. Our VENs reside in a region of the brain that evolved to integrate taste and smell, so he suggests that many of the traits now associated with the FI evolved from the simple act of deciding whether food is good to eat or likely to make you ill. When reaching that decision, he says, the quicker the “gut” reaction kicks in the better. And if you can detect this process in others, so much the better.

“One of the important functions that seems to reside in the FI has to do with empathy,” he says. “My take on this is that empathy arose in the context of shared food – it’s very important to observe if members of your social group are becoming ill as a result of eating something.” The basic feeding circuitry, including the rudimentary VENs, may then have been co-opted by some species to work in other situations that involve a decision, like working out if a person is trustworthy or to be avoided. “So when we have a feeling, whether it be about a foodstuff or situation or another person, I think that engages the circuitry in the fronto-insular cortex and the VENs are one of the outputs of that circuitry,” says Allman.

Allman’s genetics work suggests he may be on to something. His team found that VENs in one part of the FI are expressing the genes for hormones that regulate appetite. There are also a lot of studies showing links between smell and taste and the feelings of strong emotions. Our physical reaction to something we find morally disgusting, for example, is more or less identical to our reaction to a bitter taste, suggesting they may share common brain wiring (Science, vol 323, p 1222). Other work has shown that judging a morally questionable act, such as theft, while smelling something disgusting leads to harsher moral judgements (Personality and Social Psychology Bulletin, vol 34, p 1096). What’s more, Allman points out that our language is loaded with analogies – we might find an experience “delicious”, say, or a person “nauseating”. This is no accident, he says.

Red herring

However, it is only in highly social animals that VENs live exclusively in the scent and taste regions. In the others, like giraffes and hippos, VENs seem to be sprinkled all over the brain. Allman, however, points out that these findings may be a red herring, since without understanding the genes they express, or their function, we can’t even be sure how closely these cells relate to human VENs. They may even be a different kind of cell that just looks similar.

Based on the evidence so far, however, Hof thinks that the ancestral VENs would have been more widespread, as seen in the hippo brain, and that over the course of evolution they then migrated to the ACC and FI in some animals, but not others – though he admits to having no idea why that might be. He suspects the pressures that shaped the primate brain may have been very different to those that drove the evolution of whales and dolphins.

Craig has hit upon one possibility that would seem to fit all of these big-brained animals. He points out that the bigger the brain, the more energy it takes to run, so it is crucial that it operates as efficiently as possible. A system that continually monitors the environment and the people or animals in it would therefore be an asset, allowing you to adapt quickly to a situation to save as much energy as possible. “Evolution produced an energy calculation system that incorporated not just the sensory inputs from the body but the sensory inputs from the brain,” Craig says. And the fact that we are constantly updating this picture of “how I feel now” has an interesting and very useful by-product: we have a concept that there is an “I” to do the feeling. “Evolution produced a very efficient moment-by-moment calculation of energy utilisation and that had an epiphenomenon, a by-product that provided a subjective representation of my feelings.”

If he’s right – and there is a long way to go before we can be sure – it raises a very humbling possibility: that far from being the pinnacle of brain evolution, consciousness might have been a big, and very successful accident.


Caroline Williams is a writer based in Surrey, UK

. . .

To find out where I sourced the New Scientist article from, please click here.

OR to learn a bit more about the author, please visit her LinkedIn profile page by clicking here.

. . .

Be it known… This entry was written as a complement to one that was posted earlier in the week, entitled “Beyond Environment: Falling Back In Love With Mother Earth.”

Someone once said, “The trouble with weather forecasting is that it’s right too often for us to ignore it and wrong too often for us to rely on it.” But ever since I read this, I’ve been looking at my local weather forecasts every day for nearly two years straight… And I’ve got to say, looking one day ahead, the Met Office seem to get it nearly 95% right… Seriously, you don’t have to take my word on this. Just check it out for yourselves. That said, I noticed that when the Met Office make general forecasts five days into the future, their accuracy falls quite substantially. On the whole – while I haven’t been taking as much notice of these five-day forecasts – I’d say they tend to get them roughly 60% right. Now that, in my books, is definitely good predicting. How do they do it? Well, they’ve been using some of the world’s biggest and best supercomputers to crunch all the raw numerical data that is gathered from a vast array of sources (both manual and remote-sensing data posts), of which they literally have thousands: from military airfields all the way out to stations at sea. This diverse spread of data gives them a really unique (and very accurate) perspective on weather patterns here in the UK, demonstrating how temperature, wind, sun, rain, cloud and other meteorological phenomena all feed back into each other to create the weather patterns that we observe in our daily lives.

But is it really a clear-cut and easy-to-understand science when trying to work out how these individual phenomena affect each other? As some of you may already know, Edward Lorenz made a big discovery back in 1961 while studying computer simulations of weather systems. He noticed a weather simulation unfolding in a completely unexpected way as the result of a shortcut he had taken: re-entering data to only three decimal places rather than six. This sensitivity to initial conditions has been well studied over the last 50 years under the name Chaos Theory. The Met Office know a lot about how non-linear dynamics operate within weather systems here on Earth… In fact they’re presently doing a lot of research into the sensitivity of the Earth’s weather system and how human activities affect it. If you ask me whether mankind is seriously affecting the environment in which he lives through his activities… I’d tell you a very big, “YES!” Just as Lorenz saw huge unexpected variances in his computer simulations, ones that arose from minuscule differences in the data entered at the beginning, so too will mankind see even bigger changes in the weather systems that we expect to see here on Earth. It’s no joke… Mankind isn’t varying the environment by minuscule amounts anymore, as we might have done 2,000 years ago… We’re slashing the environment to pieces by whacking great asphalt cities down all over the Earth, by burning 400 years’ worth of energy stored from the sun by plants, by building dams to regulate the Earth’s natural water flow, by turning ancient forests into agricultural land, etc…
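The effect Lorenz stumbled upon is surprisingly easy to reproduce on any modern computer. Below is a minimal sketch (mine, not Lorenz’s actual program): it integrates the Lorenz equations with a simple Euler method, using the textbook parameter values, and starts two runs whose initial conditions differ only because one has been rounded to three decimal places – just like his re-entered data. The tiny gap between the two trajectories grows until they bear no resemblance to each other.

```python
import math

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz equations by one simple Euler step."""
    x, y, z = state
    return (x + sigma * (y - x) * dt,
            y + (x * (rho - z) - y) * dt,
            z + (x * y - beta * z) * dt)

# Two starting points differing only beyond the third decimal place,
# mimicking Lorenz's rounded re-entry of his simulation data in 1961.
a = (0.506127, 1.0, 1.05)
b = (0.506,    1.0, 1.05)

max_sep = 0.0
for _ in range(3000):  # integrate both runs out to t = 30
    a, b = lorenz_step(a), lorenz_step(b)
    max_sep = max(max_sep, math.dist(a, b))

# The initial gap of 0.000127 grows to the scale of the whole attractor.
print(round(max_sep, 2))
```

The step size and the assertion that three versus six decimal places is enough to cause divergence are the only inputs here; everything else follows from the equations themselves, which is precisely what makes long-range forecasting so hard.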

On the whole I try to be as optimistic as possible in my general outlook on everything we as human beings do. Saying that, I was never the type of person who would adopt an overly positive outlook about something just because being positive would make the situation better. To me, that’s a bit like thinking that you can fly and then throwing yourself off the top of a building, expecting to be able to navigate the air currents safely back down to earth. Not my style. If you want to be that positive, then try taking off from the ground first. At least then you’ll know whether or not your positivity and belief in your ability is justified. So, that’s it. I suppose I’d rather get my facts straight and look at whatever situation I was in from as open-minded a view as possible, regardless of what it was going to elucidate. I mean, I can play a bit of guitar and some very basic piano, but I can’t read music off a staved sheet very well at all. So perhaps I wouldn’t remain positive about being able to proficiently play Debussy’s Arabesque No. 1 after only one week of solitary practice with nothing more than a musical score to guide me… But I could perhaps muster a half-decent attempt after one week of tuition with a good teacher and with access to an audible version of the music too.

And that’s my point here… There are different variables within certain parameters of any given situation that, when viewed by an observer, define whether or not one could feel positive about obtaining a particular outcome for that given situation. If some of the most obvious parameters for success are not present – masked over by a general optimistic view that things will work out, so why bother trying too hard – and yet one still feels exceedingly positive about obtaining a result… Well, my common sense would either tell me to lower my positive outlook about the outcome of events, or pull my finger out and get on with working out a way to succeed.

Nina Fedoroff was recently quoted as saying, “We are sliding back into a dark era, and there seems little we can do about it.” During a conference last week, the president of the American Association for the Advancement of Science (AAAS) confessed that she was “scared to death” by the anti-science movement that is spreading, uncontrolled, across the US and the rest of the Western world. While I feel that this statement might be a little too strong for my own stance on this general “head in the sand” tactic, I do empathise with Fedoroff, because her natural survival instinct – the one that watches a friend get eaten by a tiger, so that when she sees another tiger she runs, rather than stroking the rather large and cuddly cat – is obviously telling her that a lot of folk out there do not share her concern for where we, as a civilization on a planet, are heading. Many have no real desire to understand too much about what sustainability actually is, let alone steer their lives into modes of minimizing capitalist consumption by growing their own food, managing their own woodlands for firewood, insulating their homes, giving up their cars, etc…

For me, this is a bit like someone thinking they can fly, going straight to the top floor of the Empire State Building and launching themselves off the top parapet. Yes, they might think that they’re flying as they SWOOSH past floor after floor, hurtling towards the solid asphalt below at breakneck speed. But is it really flying? I mean, can they sustain the period of time that they’re in the air without the sudden SPLAT at the end? I mean… Can we sustain even half the number of human beings at our present rates of consumption? Can we sustain this huge spurt of uncontrolled growth that mankind is witnessing in the 21st century? In fact… Just how many people do you think the Earth can support?

You see… When one of the world’s most distinguished agricultural scientists tells me that she’s “scared to death” by what she sees going on around her, and does so at one of the most well-known annual scientific meetings… Well… My common sense tells me to at least oblige this lady and have a listen to what she has to say. “We are sliding back into a dark era,” Fedoroff said. “And there seems little we can do about it. I am profoundly depressed at just how difficult it has become merely to get a realistic conversation started on issues such as climate change or genetically modified organisms.”

Would you believe… Just like the MET Office studies weather patterns in order to forecast the coming day’s weather, so too are there people looking at today’s and yesterday’s global patterns of human growth and resource consumption, who are making predictions about what the future might hold for us. And I’ve got to say, while these studies might not be as detailed or as developed as some of the weather studies that the MET Office are conducting… The ones that have come to light certainly show us something that we should be heeding.

Like I said… I’m quite an optimistic person. But I still read and/or listen to the weather forecasts every morning… And if there’s a chance that it’ll be a rainy, cold, wet and windy day, I won’t remain optimistic that the weather might suddenly change to something better and wear nothing but my shorts and a T-shirt. I mean, I already know from observation how accurate the MET Office’s weather forecasting can be… For them to be able to make these predictions, the observations from which their science was built, along with their forecasts, must be quite accurate and sound. I mean, 95% accuracy for one day ahead is near on great. Thus I base my actions for the coming day on this forecast, like whether I should take an umbrella with me, or wear shorts and sunglasses, etc… The way the professionals do their stuff down at the MET Office instils in me an air of confidence about what they do and how they do it… So I listen to them when they advise us on the weather.

Likewise, having studied a scientific discipline myself at university, one that looked at methods for detecting illnesses within the human body, I know that there is great accuracy in these methods. They are used time and again to catch people with cancer, bacterial infections, etc… And they do so with near on 85% accuracy. So, on the whole, I have great respect for the discipline of scientific study… And I have a great respect for many of those involved in the areas of science. Don’t get me wrong… We’re not perfect. Just like the MET Office only get 95% of the coming day’s forecasts right, other areas of science don’t get it right all the time either. But should those scientists be branded with that lousy 5% margin of error? In my humble opinion, I’d rather reap the benefits of that 95% accuracy than let the 5% error bother me. So when some other professionals/scientists say something that I see to be important for all our future welfare here on Earth, I usually give it at least a once-over before I decide whether to ignore it or not… At least a once-over!

So I’ll finish here by saying… If most of you want us all to jump off the building because you think you can fly, there is no way on Earth (or in the air) that I’m gonna keep quiet and pretend that I can sustain this ‘flight’ while I’m hurtling past the windows of the building that we’ve all just jumped off, just to keep the majority of you lot happy. Like I said… It’s not my style. My survival instinct is telling me that I want to survive, regardless of whether you do or not. And if I’m falling down – rather than flying down – with the rest of you, I’m gonna engage in some chit-chat on the way down about how to survive this fall.

. . .

. . .

Doomsday Book

Forty years ago, a highly controversial study warned that we had to curb growth or risk global meltdown. Was it right?

AT THE beginning of the 1970s, a group of young scientists set out to explore our future. Their findings shook a generation and may be even more relevant than ever today.

The question the group set out to answer was: what would happen if the world’s population and industry continued to grow rapidly? Could growth continue indefinitely or would we start to hit limits at some point? In those days, few believed that there were any limits to growth – some economists still don’t. Even those who accepted that on a finite planet there must be some limits usually assumed that growth would merely level off as we approached them.

In most runs of the World3 computer model, rapid growth is followed by sharp decline. So far the standard run (main graphic) corresponds well with measurements of real-world equivalents (dotted lines).

These notions, however, were based on little more than speculation and ideology. The young scientists tried to take a more rigorous approach: using a computer model to explore possible futures. What was shocking was that their simulations, far from showing growth continuing forever, or even levelling out, suggested that it was most likely that boom would be followed by bust: a sharp decline in industrial output, food production and population. In other words, the collapse of global civilisation.

These explosive conclusions were published in 1972 in a slim paperback called The Limits to Growth. It became a bestseller – and provoked a furious backlash that has obscured what it actually said. For instance, it is widely believed that Limits predicted collapse by 2000, yet in fact it made no such claim. So what did it say? And 40 years on, how do its projections compare with reality so far?

The first thing you might ask is, why look back at a model devised in the days when computers were bigger than your fridge but less powerful than your phone? Surely we now have far more advanced models? In fact, in many ways we have yet to improve on World3, the relatively simple model on which Limits was based. “When you think of the change in both scientific and computational capabilities since 1972, it is astounding there has been so little effort to improve upon their work,” says Yaneer Bar-Yam, head of the New England Complex Systems Institute in Cambridge, Massachusetts.

It hasn’t happened in part because of the storm of controversy the book provoked. “Researchers lost their appetite for global modelling,” says Robert Hoffman of company WhatIf Technologies in Ottawa, Canada, which models resources for companies and governments. “Now, with peak oil, climate change and the failure of conventional economics, there is a renewed interest.”

The other problem is that as models get bigger, it becomes harder to see why they produce certain outcomes and whether they are too sensitive to particular inputs, especially with complex systems. Thomas Homer-Dixon of the University of Waterloo in Ontario, Canada, who studies global systems and has used World3, thinks it may have been the best possible compromise between over-simplification and unmanageable complexity. But Hoffman and Bar-Yam’s groups are now trying to do better.

World3 was developed at the Massachusetts Institute of Technology. The team took what was known about the global population, industry and resources from 1900 to 1972 and used it to develop a set of equations describing how these parameters affected each other. Based on various adjustable assumptions, such as the amount of non-renewable resources, the model projected what would happen over the next century.

The team compares their work to exploring what happens to a ball thrown upwards. World3 was meant to reveal the general behaviour that results – in the case of a ball, going up and then falling down – not to make precise predictions, such as exactly how high the ball would go, or where and when it would fall. “None of these computer outputs is a prediction,” the book warned repeatedly.

Assuming that business continued as usual, World3 projected that population and industry would grow exponentially at first. Eventually, however, growth would begin to slow and would soon stop altogether as resources grew scarce, pollution soared and food became limited. “The Limits to Growth said that the human ecological footprint cannot continue to grow indefinitely, because planet Earth is physically limited,” says Jørgen Randers of the Norwegian School of Management in Oslo, one of the book’s original authors.

What’s more, instead of stabilising at the peak levels, or oscillating around them, in almost all model runs population and industry go into a sharp decline once they peak. “If present growth trends in world population, industrialisation, pollution, food production and resource depletion continue unchanged, the limits to growth on this planet will be reached sometime within the next 100 years. The most probable result will be a sudden and rather uncontrollable decline in both population and industrial capacity,” the book warned.

This was unexpected and shocking. Why should the world’s economy collapse rather than stabilise? In World3, it happened because of the complex feedbacks between different global subsystems such as industry, health and agriculture. More industrial output meant more money to spend on agriculture and healthcare, but also more pollution, which could damage health and food production.

And most importantly, says Randers, in the real world there are delays before limits are understood, institutions act or remedies take effect. These delayed responses were programmed into World3. The model crashed because its hypothetical people did not respond to the mounting problems before underlying support systems, such as farmland and ecosystems, had been damaged.

Instead, they carried on consuming and polluting past the point the model world could sustain. The result was what economists call a bubble and Limits called overshoot. The impact of these response delays was “the fundamental scientific message” of the study, says Randers. Critics, and even fans of the study, he says, didn’t get this point.
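The mechanism is simple enough to caricature in a few lines. Below is a toy model of my own devising – vastly cruder than World3, and not taken from the book – in which growth responds to scarcity only after a delay. The delay alone is enough to turn a smooth levelling-off into overshoot and collapse:

```python
def grow(delay, r=0.1, K=100.0, steps=400):
    """Logistic-style growth where this step's growth rate responds to how
    close to the limit K the population was `delay` steps ago."""
    x = [1.0] * (delay + 1)
    for _ in range(steps):
        felt = x[-1 - delay]   # out-of-date information about the limit
        x.append(max(x[-1] + r * x[-1] * (1.0 - felt / K), 0.0))
    return x

instant = grow(delay=0)    # perfect information: levels off smoothly at K
delayed = grow(delay=25)   # delayed information: overshoots K, then crashes

peak = max(delayed)
trough = min(delayed[delayed.index(peak):])
print(f"no delay: settles at {instant[-1]:.1f} (limit K = 100)")
print(f"with delay: peaks at {peak:.0f}, then falls to {trough:.2f}")
```

The equations here are placeholders, but the qualitative behaviour is the book’s “fundamental scientific message”: the same system that stabilises under prompt feedback booms and busts when the feedback arrives late.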

The other message missed was that Limits was about how catastrophe could be averted. It did not say that humanity was doomed. In model runs where growth of population and industry were constrained, growth did level out rather than collapse – the stabilised scenario (see graph).

Yet few saw it this way. Instead, the book came under fire from all sides. Scientists didn’t like Limits because the authors, anxious to publicise their findings, put it out before it was peer reviewed. The political right rejected its warning about the dangers of growth. The left rejected it for betraying the aspirations of workers. The Catholic church rejected its plea for birth control.

Critical Points

The most strident criticisms came from economists, who claimed Limits underestimated the power of the technological fixes humans would surely invent. As resources ran low, for instance, we would discover more or develop alternatives.

Yet the Limits team had tested this. In some runs, they gave World3 unlimited, non-polluting nuclear energy – which allowed extensive substitution and recycling of limited materials – and a doubling in the reserves of nonrenewables that could be economically exploited. All the same, the population crashed when industrial pollution soared. Then fourfold pollution reductions were added as well: this time, the crash came when there was no more farmland.

Adding in higher farm yields and better birth control helped in this case. But then soil erosion and pollution struck, driven by the continuing rise of industry. Whatever the researchers did to eke out resources or stave off pollution, exponential growth was simply prolonged, until it eventually swamped the remedies. Only when the growth of population and industry were constrained, and all the technological fixes applied, did it stabilise in relative prosperity.

The crucial point is that overshoot and collapse usually happened sooner or later in World3 even if very optimistic assumptions were made about, say, oil reserves. “The general behaviour of overshoot and collapse persists, even when large changes to numerous parameters are made,” says Graham Turner of the CSIRO Ecosystem Sciences lab in Crace, Australia.

This did not convince those who thought technology could fix every problem. And with so much criticism, the idea took hold that Limits had been disproved. That mantra has been repeated so often that it became the received wisdom, says Ugo Bardi of the University of Florence in Italy, author of a recent book about Limits. “The common perception is that the work was discredited scientifically. I heard it again at a meeting last April,” says Homer-Dixon. “It wasn’t.”

It wasn’t just confusion. “Misunderstanding was enhanced by a media campaign very similar to the one that has been recently directed against climate science,” says Bardi.

One of the most common myths is that Limits predicted collapse by 2000. Yet as a brief glance at the “standard run” shows, it didn’t (see graph). The book does mention a 1970 estimate by the US Bureau of Mines that the world had 31 years of oil left. The bureau calculated this by dividing known reserves by the current rate of consumption. Rates of consumption, however, were increasing exponentially, so Limits pointed out that in fact oil had only 20 years left if nothing changed. But this calculation was made to illustrate the effects of exponential growth, not to predict that there were only 20 years of oil left.
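The arithmetic behind that illustration is worth seeing. With reserves R and annual consumption C, the “static index” is simply R/C; if consumption instead grows at rate g per year, the reserves are exhausted after T = ln(1 + gR/C)/g years. The sketch below is my own worked example, and the 3.9% growth rate is my assumption for 1970s oil consumption (the article doesn’t state the exact rate used), chosen because it reproduces the 20-year figure:

```python
import math

def static_index(reserves, consumption):
    """Years of supply if consumption stays constant."""
    return reserves / consumption

def exponential_index(reserves, consumption, growth):
    """Years of supply if consumption grows by `growth` per year:
    solves (C/g) * (e^(g*T) - 1) = R for T."""
    return math.log(1.0 + growth * reserves / consumption) / growth

# Measure reserves in years of 1970 consumption, per the Bureau of Mines figure.
print(static_index(31.0, 1.0))                        # 31.0 years
print(round(exponential_index(31.0, 1.0, 0.039), 1))  # about 20 years (assumed 3.9%/yr growth)
```

So a “31-year supply” quietly becomes a roughly 20-year supply once growth is accounted for – which was precisely the book’s point about exponential arithmetic, not a prediction that oil would run out by 1990.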

When Matthew Simmons, a leading oil-industry banker, finally read Limits in the 1990s, he was surprised to find none of the false predictions he had heard about. On the contrary, he concluded, population and energy growth largely matched the basic simulation. He felt Limits got so much attention, then lost it, partly because the oil shock of 1973 focused minds on resource shortages that were then largely resolved.

There have been other recent re-appraisals of the book. In 2008, for instance, Turner did a detailed statistical analysis of how real growth compares to the scenarios in Limits. He concluded that reality so far closely matches the standard run of World3.

Does that mean we face industrial collapse and widespread death? Not necessarily. A glance at Turner’s curves shows we haven’t yet reached the stage of the standard run, later this century, when such events are predicted.

In the model, overshoot and collapse are preceded by exponential growth. Exponential growth starts out looking just like linear growth, says Bar-Yam: only later does the exponential curve start heading skywards. After only 40 years, we can’t yet say whether growth is linear or exponential.

We already know the future will be different from the standard run in one respect, says Bar-Yam. Although the actual world population up to 2000 has been similar, in the scenario the rate of population growth increases with time – one of the exponential drivers of collapse. Although Limits took account of the fact that birth rates fall as prosperity rises, in reality they have fallen much faster than was expected when the book was written. “It is reasonable to be concerned about resource limitations in fifty years,” Bar-Yam says, “but the population is not even close to growing [the way Limits projected in 1972].”

The book itself may be partly responsible. Bar-Yam thinks some of the efforts in the 1970s to cut population growth were at least partly due to Limits. “If it helped do that, it bought us more time, and it’s a very important work in the history of humanity,” he says.

Yet World3 still suggests we’ll hit the buffers eventually. The original Limits team put out an updated study using World3 in 2005, which included faster-falling birth rates. Except in the stabilising scenario, World3 still collapsed.

Otherwise, the team didn’t analyse the correspondence between the real world and their 1972 scenarios in detail – noting only that they generally match. “Does this correspondence with history prove our model was true? No, of course not,” they wrote. “But it does indicate that [our] assumptions and conclusions still warrant consideration today.”

This remains the case. Forty years on from its publication, it is still not clear whether Limits was right, but it hasn’t been proved wrong either. And while the model was too pessimistic about birth and death rates, it was too optimistic about the future impact of pollution. We now know that overshoot – the delayed response to problems that makes the effects so much worse – will eventually be especially catastrophic for climate change, because the full effects of greenhouse gases will not be apparent for centuries.

There will be no more sequels based on World3, though. The model can no longer serve its purpose, which was to show us how to avoid collapse. Starting from the current conditions, no plausible assumptions produce any result but overshoot. “There is no sense in only describing a series of collapse scenarios,” says Dennis Meadows, another of the original authors of Limits.

Randers, meanwhile, is editing a book called The Next Forty Years, about what we can do when limits start to bite. “I don’t like the resulting future, but it should be described, particularly because it would have been so easy to make a much better future,” he says.

The only hope is that we can invent our way out of trouble. Our ingenuity has allowed us to overcome many limits, says Homer-Dixon, and we can’t predict what revolutionary technological innovations humanity might come up with. Yet he is pessimistic: “The question is, can we deliver ingenuity at an increasing rate indefinitely.” Because that is what we’ll need to do if growth continues.

Instead of declaring we are doomed, or proclaiming that technology will save us, we should explore the future more rigorously, says Bar-Yam. We need better models. “If you think the scientific basis of those conclusions can be challenged, then the answer is more science,” he says. “We need a much better understanding of global dynamics.”

We need to apply that knowledge, too. The most important message of Limits was that the longer we ignore the problems caused by growth, the harder they are to overcome. As we pump out more CO2, it is clear this is a lesson we have yet to learn.

by Debora MacKenzie (who is a consultant for New Scientist based in Brussels, Belgium)

. . .

To find out where I sourced this article from, please click here.

While, to follow the author on Twitter, please click here to view her Twitter page.

OR to find out more about “The Limits To Growth”, please click here.

AND to buy an updated revision of “The Limits To Growth”, please click here.

Clumps of galaxies link together in clusters that resemble connections found within the brains of mammals.

In some ways it’s amazing… But in other ways, I wonder if I’m really surprised… ? I’ve been observing fractal patterns now for quite a few years in what many refer to as seemingly unrelated fields of occurrence, e.g. hearing them in reverb simulations that I build within Max/MSP, OR while observing the patterns with which the Penicillium fungus grows on the bread that I want to avoid using in the toaster most mornings, through to markets and their ever-shifting price-scapes… They’re everywhere. Yes… Everywhere.

They’ve even managed to naturally find their way into the experiential textures of my mind’s dynamic… Textures that the brain seems to weave together through strange-attractor-like eddies that occur between various nodes and hierarchical synaptic electrical discharges that fire so readily between various clusters within the brain’s overall structure… This in turn allows a type of consciousness to form, i.e. myself, to perceive the material ‘aspects’ of the environment that I presently find myself in… ‘Aspects’ that are continually changing/moving/shifting. Most of these transformations are commonly seen as material changes, e.g. day turning to night OR hot water turning into cold water… Changes that are forged from the same principles and materials, i.e. the atomic debris/fabric of the universe, that ‘I’ find myself a result of.

It’s amazing that ‘my’ five senses can somehow distinguish between these multifarious ‘aspects’ simply by observing the ever-changing environmental interplay that unfolds in the world around me – and within me – allowing my body to cross-reference these abstractions (such as smell, sight, touch, taste, sound, etc…) into a functional braid of linear temporal registers, plied together into a complex feedback loop of conscious awareness that correlates them all into the fabric of experience. Through the natural evolution of this holographic image of universal dynamics – one that has been naturally selected for in most living organisms here on Earth in some manner or another – it’s pretty obvious that memetic evolution has given rise to – and certainly benefited from – these unfolding fractal patterns of the mind, brain, body and environmental continuum… And, thus, so have I allowed myself – through much diligent study – to hang a myriad of meanings and socially accepted constructs onto the continuous flow of this biochemical experiential unfolding.

When I sit with this feeling, it seems very natural for everything to be just as it is… For us to be the way we are… Mortal, soft, delicate and changing… Prone to aging and death… Giving way to new progeny in an evolving loop of atomic re-awakening… And environmental readjustment/realignment… Suddenly it becomes okay to accept that one day I will die… And that my patterns of behavior will continue to ripple through the people I have met and the environment I once lived in, slowly being diluted, intermingling with other people’s activities, ever evolving… Ever changing. Perhaps we don’t ever really die… ! Then I see that ‘I’ am not as free as many might imagine we are… Rather we are more willfully able to do whatever it is we choose with the time we have here, acting within defined parameters of being… Operating to prolong our activities. I find acceptance in these limited modes… And I find true freedom in the limitless possibilities within my imagination. Just as chaos is limitless, and as the brain’s functional ordering uses chaos as a basis to operate from… So I find myself not really being surprised that the universe ‘may’ have a fractal structure. When I see my lungs on an X-ray that I had recently, there they are again… When I look at my arm closely and see the veins of blood flowing under my skin, fractal shapes come into focus… And I’m just amazed at the beauty of these patterns as they release their energetic uncoiling of potential energy into kinetic displays of wonder and marvel, spreading out over various timings into the delicately interconnected chaos of universal change.

Hydrodynamics simulation of the Rayleigh–Taylor instability

So what I thought was originally surprise… Has in fact turned out to be more of a sense of discovery… A rediscovery of my connectedness… My roots… My interlinked existence to everything – absolutely everything – around me. In many ways it has been an important rediscovery for me because this feeling of interconnectedness seems to have been masked over, obscured from obvious sight, by the daily meanderings of advertising, fictional drivel (mainly in the form of film and pulp fiction), political discussion, religious debate, scientific enquiry and general distraction, all of which seem to come from the supposed “perks” of Western modern day living…

But, thankfully, while immersing myself in this tangled mess of experiential twine – mainly by reading many, many scientific journals/publications over the last fifteen or so years, ones that concern themselves with how universal structure and function came into being (whether on the astronomical and/or microscopic levels, OR within the dynamics of the mind, brain, body and environmental continuum) – I’ve been unwittingly reconnecting myself with this feeling of interdependence. While closely keeping my eye on how the present theories (yes, theories, in the plural, because there are many of them out there) are continually evolving and changing… I’ve been unintentionally observing another form of natural selection at work… Much like Darwin did. One that is occurring within our minds. And, on the whole, it’s doing exactly what any good evolving form/system does, i.e. it works through the plethora of memetic constructs that are being formulated from experience, scrubbing out the obviously impractical and blatantly cumbersome theories and revealing only the ones that best fit the observations. Then, while subjecting these selected few to yet more stringent tests, each idea/theory is further developed… OR revealed to be a fraud. Eventually one idea/theory in particular is found… One that fits better than all the rest. One that can generate self-similar observed data when the experiments are repeated over and over again. This idea/theory then becomes a sort of fact… One that can be expounded further into more developed and concise levels of understanding… Where each idea/theory can interconnect and interrelate with other seemingly unrelated areas of scientific inquiry. Time and again, further cross-referencing and testing ensues, scrutinizing each novel idea/theory/notion… If one doesn’t fit, it is then modified, tweaked, or reconfigured to work into the overall account produced thus far… OR EVEN, if an idea is so obvious, the other areas might find themselves being revised.
This continues ad infinitum, moving ever onwards into finer details… Heading towards the vanishing point of a complexity that knows no bounds… A sort of tailor-made fitting for a more concise scientific understanding that will never be found.

In fact… The evolution of animal form works in much the same way… As Professor Armand Marie Leroi states, “Species give rise to other species, and as they do so, they change. The changes are minute and subtle, but given enough time, the results could be spectacular. And so they are!” So too do our mind-streams change and evolve over time… Allowing us to see more clearly whatever it is we are looking at.

. . . . . . . .

What Darwin didn’t Know

Documentary which tells the story of evolution theory since Darwin postulated it in 1859 in ‘On the Origin of Species’.

The theory of evolution by natural selection is now scientific orthodoxy, but when it was unveiled it caused a storm of controversy, from fellow scientists as well as religious people. They criticised it for being short on evidence and long on assertion and Darwin, being the honest scientist that he was, agreed with them. He knew that his theory was riddled with ‘difficulties’, but he entrusted future generations to complete his work and prove the essential truth of his vision, which is what scientists have been doing for the past 150 years.

Evolutionary biologist Professor Armand Marie Leroi charts the scientific endeavour that brought about the triumphant renaissance of Darwin’s theory. He argues that, with the new science of evolutionary developmental biology (evo devo), it may be possible to take that theory to a new level – to do more than explain what has evolved in the past, and start to predict what might evolve in the future.

. . . . . . . .

As time has gone on, I’ve been fortunate enough to rediscover how similar basic patterns permeate almost every single aspect of our lives as human beings… This rediscovery – for me at least – occurred because I had the fortunate experience of studying many dynamical systems for musical analogy… That is, I studied them over and over again, looking at how to translate these natural never-ending patterns into sonic textures for art’s sake. When you see them, though, you begin to spot them everywhere you care to look. It’s almost so obvious that they’re there, just staring us in the face, that we simply haven’t noticed them… They’ve always been there… In plain sight. So why would we notice them? In some ways it’s just like when the astronauts of Apollo 11 landed on the moon for the first time… When they got there, they couldn’t see any trace of the Earth around them anymore. Their home of a planet was now just a beautiful jewel hanging in the moon’s inky black sky, just out of their reach. Everything that they had taken for granted, i.e. an abundance of air, all the trees, plants, life, all the oceans of water, our homes, the people they loved, movies, the abundance of food, animals, clouds, rain, wind, etc… They just weren’t there around them anymore… And it stood out like a sore thumb how fortunate they were to live on a planet that had all those things… Things that were so common on Earth. This voyage to the moon profoundly changed the way they – Neil Armstrong, Edwin “Buzz” Aldrin Jr and pilot Michael Collins – saw the Earth afterwards. In fact it changed the perspective of every astronaut who ever went to the moon… So that when they returned, they couldn’t help but wonder why people couldn’t see what they now could see so clearly, i.e.
how precious the Earth is and all the beings that live on it… How connected we all are to one another… To everything around us… How much we need our planet… And how futile all our wars and disagreements are in the greater scheme of everything.

Something similar is going on in science now… Over the last year or so I’ve been coming across many publications wherein scientists seem willing to let go of some of their earlier preconceptions… To admit that the textbook ideals – ones which their predecessors wrote down with absolute certitude for their students to learn from – concerning universal flow and other areas of scientific interest don’t really quite fit with what those students are actually observing in the “real world…” And, having had to pull seemingly bizarre concepts, such as dark matter, out of the hat in order to balance their predecessors’ equations… Many are beginning to feel that it’s time to evolve again. Thus it can be noticed that many of the new generation of scientists are looking for novel ideas with which to re-evaluate what they have learned… And as the models get more and more complex, so too do we see that this complexity needs to be better understood… Revealing many types of fractal structures and all sorts of non-linear dynamics residing within the natural flow of universal unfolding.
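As a small illustration of the kind of non-linear dynamics being talked about here, consider the classic logistic map – a standard textbook example (my own choice, not one drawn from the publications mentioned above) of how one simple deterministic rule can produce anything from stillness to chaos:

```python
# The classic logistic map, x -> r * x * (1 - x): one of the simplest
# non-linear rules there is, yet its long-term behaviour ranges from a
# steady state to full-blown chaos depending on the single parameter r.

def logistic_orbit(r, x0=0.2, skip=500, keep=8):
    """Iterate the map, discard the transient, return the values it settles into."""
    x = x0
    for _ in range(skip):
        x = r * x * (1 - x)
    settled = []
    for _ in range(keep):
        x = r * x * (1 - x)
        settled.append(round(x, 4))
    return settled

print(logistic_orbit(2.8))  # settles on a single fixed point
print(logistic_orbit(3.2))  # flips between two values for ever
print(logistic_orbit(3.9))  # chaotic: never settles, never repeats
```

Nothing random enters anywhere – the rule is fully deterministic – yet at r = 3.9 the orbit wanders without ever repeating, which is precisely the sort of behaviour the old linear textbook ideals struggle to accommodate.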

As I have mentioned before in several blogs contained in this website… Until fractal/chaotic dynamics are properly introduced and included into the equations of physicists, chemists, biologists, psychologists, etc… There will always be a thin veil of mist that detaches their efforts from discovering the true order of things. For, until then, discrepancies and vague approximations of how universal flow actually functions will cloud the depth of understanding that lies waiting to be seen beneath this mist.

That said… There are those who are already daring to go beyond… As Francesco Sylos Labini clearly demonstrates with his intuitive proposition below… The universe may have a fractal structure…

. . . . . . . .

Largest Cosmic Structures ‘Too Big’ For Theories

Space is festooned with vast “hyperclusters” of galaxies, a new cosmic map suggests. It could mean that gravity or dark energy – or perhaps something completely unknown – is behaving very strangely indeed.

We know that the universe was smooth just after its birth. Measurements of the cosmic microwave background radiation (CMB), the light emitted 370,000 years after the big bang, reveal only very slight variations in density from place to place. Gravity then took hold and amplified these variations into today’s galaxies and galaxy clusters, which in turn are arranged into big strings and knots called superclusters, with relatively empty voids in between.

On even larger scales, though, cosmological models say that the expansion of the universe should trump the clumping effect of gravity. That means there should be very little structure on scales larger than a few hundred million light years across.

But the universe, it seems, did not get the memo. Shaun Thomas of University College London (UCL), and colleagues have found aggregations of galaxies stretching for more than 3 billion light years. The hyperclusters are not very sharply defined, with only a couple of per cent variation in density from place to place, but even that density contrast is twice what theory predicts.

“This is a challenging result for the standard cosmological models,” says Francesco Sylos Labini of the University of Rome, Italy, who was not involved in the work.

Colour guide

The clumpiness emerges from an enormous catalogue of galaxies called the Sloan Digital Sky Survey, compiled with a telescope at Apache Point, New Mexico. The survey plots the 2D positions of galaxies across a quarter of the sky. “Before this survey people were looking at smaller areas,” says Thomas. “As you look at more of the sky, you start to see larger structures.”

A 2D picture of the sky cannot reveal the true large-scale structure in the universe. To get the full picture, Thomas and his colleagues also used the colour of galaxies recorded in the survey.

More distant galaxies look redder than nearby ones because their light has been stretched to longer wavelengths while travelling through an expanding universe. By selecting a variety of bright, old elliptical galaxies whose natural colour is well known, the team calculated approximate distances to more than 700,000 objects. The upshot is a rough 3D map of one quadrant of the universe, showing the hazy outlines of some enormous structures.

Coagulating dark energy

The result hints at some profound new physical phenomenon, perhaps involving dark energy – the mysterious entity that is accelerating the expansion of space. Dark energy is usually assumed to be uniform across the cosmos. If instead it can pool in some areas, then its repulsive force could push away nearby matter, creating these giant patterns.

Alternatively, we may need to extend our understanding of gravity beyond Einstein’s general theory of relativity. “It could be that we need an even more general theory to explain how gravity works on very large scales,” says Thomas.

A more mundane answer might yet emerge. Using colour to find distance is very sensitive to observational error, says David Spergel of Princeton University. Dust and stars in our own galaxy could confuse the dataset, for example. Although the UCL team have run some checks for these sources of error, Thomas admits that the result might turn out to be the effect of foreground stars either masking or mimicking distant galaxies.

Fractal structure?

“It will be essential to confirm this with another technique,” says Spergel. The best solution would be to get detailed spectra of a large number of galaxies. Researchers would be able to work out their distances from Earth much more precisely, since they would know how much their light has been stretched, or red-shifted, by the expansion of space.

Sylos Labini has made such a map using a subset of Sloan data. It reveals clumpiness on unexpectedly large scales – though not as vast as these. He believes that the universe may have a fractal structure, looking similar at all scales.

A comprehensive catalogue of spectra for Sloan galaxies is being assembled in a project called the Baryon Oscillation Spectroscopic Survey. Meanwhile, the Dark Energy Survey will use a telescope in Chile to measure the colours of even more galaxies than Sloan, beginning in October. Such maps might bring hyperclusters out of the haze – or consign them to the status of monstrous mirage.

by Stephen Battersby

Journal reference: Physical Review Letters, DOI: 10.1103/PhysRevLett.106.241301

. . . . . . . .

For some continued viewing on the subject, please watch the following BBC documentary entitled, “The Secret Life Of Chaos”.

The Secret Life Of Chaos

Chaos theory has a bad name, conjuring up images of unpredictable weather, economic crashes and science gone wrong. But there is a fascinating and hidden side to Chaos, one that scientists are only now beginning to understand.

It turns out that chaos theory answers a question that mankind has asked for millennia – how did we get here?

In this documentary, Professor Jim Al-Khalili sets out to uncover one of the great mysteries of science – how does a universe that starts off as dust end up with intelligent life? How does order emerge from disorder?

It’s a mind-bending, counterintuitive and – for many people – deeply troubling idea. But Professor Al-Khalili reveals the science behind much of the beauty and structure in the natural world and discovers that, far from being magic or an act of God, it is in fact an intrinsic part of the laws of physics. Amazingly, it turns out that the mathematics of chaos can explain how and why the universe creates exquisite order and pattern.

And the best thing is that one doesn’t need to be a scientist to understand it. The natural world is full of awe-inspiring examples of the way nature transforms simplicity into complexity. From trees to clouds to humans – after watching this film you’ll never be able to look at the world in the same way again.

. . . . . . . .

To find out where I sourced this article from, please click here.

Or to find out where the BBC documentaries originally came from, please click here and/or here.

I’ve always wondered what it must be like to become a Buddha. And no doubt, whenever I’ve vocalised this interest, I’ve usually been told that pondering such things is a waste of one’s time and effort… That it is comparable to a frog who has only ever seen a pond trying to imagine (or even romanticise about) what an ocean must be like, having only been told how it looks by a friend who has seen it… I mean… How can one’s hemmed-in, limited memetic view break free from the shackles of narrow-mindedness, so as to truly see what lies beyond all description… Pure experience, devoid of any imposed, thought-out, cognisant meaning… What might even be aptly described as perfect ‘omniscience’?

Well I’ve looked into the Mandelbrot set’s wondrous coiling rhythm on many late night journeys… Zooming into its infinite boundary, confined within a finite space… And I feel that there is some resonance with how a Buddha must feel and/or see everything i.e. knowing perfectly all the chains of cause and effect as they stretch into the past and future. To see interdependence so clearly, and to be free of the limiting notion of time, must allow for such great wisdom… Especially if the right type of universal compassion is generated to guide one’s efforts here on Earth… For we’ve already seen how easy it is to slip into self-justified views of understanding that seem only to protect our own vested, self-important interests (see “Manufacturing Consent“).
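For anyone who wants to poke at that infinite boundary themselves, a minimal escape-time sketch is all it takes – this is the standard |z| > 2 test, and the boundary point tried below is simply my own illustrative choice:

```python
# A minimal escape-time test for the Mandelbrot set: iterate z -> z*z + c
# and count how long the orbit takes to leave the disc |z| <= 2.
# Points near the set's boundary take arbitrarily long to decide, which is
# why zooming into the edge reveals endless detail.

def mandelbrot_iterations(c, max_iter=1000):
    """Iterations before escape, or max_iter if the orbit stays bounded."""
    z = 0
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

print(mandelbrot_iterations(0))              # never escapes: inside the set
print(mandelbrot_iterations(1))              # escapes almost immediately
print(mandelbrot_iterations(-0.75 + 0.05j))  # near the boundary: takes far longer
```

Colour each point of the plane by its escape count and the familiar coiling filigree appears; the closer to the boundary you look, the more iterations it takes to decide, without end.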

But, I digress… Because what I’m really interested in here is… How can/could a Buddha see into the future? I mean, if it really can be done, then surely there would be some type of hint at how this process might well become a reality for those of us who aren’t yet Buddhas? Well, apparently, there is now some evidence coming through that suggests as much… Although it might well need more understanding and processing before it can properly be taken as a real phenomenon.

Is This Evidence That We Can See The Future?

Extraordinary claims don’t come much more extraordinary than this: events that haven’t yet happened can influence our behaviour.

Parapsychologists have made outlandish claims about precognition – knowledge of unpredictable future events – for years. But the fringe phenomenon is about to get a mainstream airing: a paper providing evidence for its existence has been accepted for publication by the leading social psychology journal.

What’s more, sceptical psychologists who have pored over a preprint of the paper say they can’t find any significant flaws. “My personal view is that this is ridiculous and can’t be true,” says Joachim Krueger of Brown University in Providence, Rhode Island, who has blogged about the work on the Psychology Today website. “Going after the methodology and the experimental design is the first line of attack. But frankly, I didn’t see anything. Everything seemed to be in good order.”

Critical Mass

The paper, due to appear in the Journal of Personality and Social Psychology before the end of the year, is the culmination of eight years’ work by Daryl Bem of Cornell University in Ithaca, New York. “I purposely waited until I thought there was a critical mass that wasn’t a statistical fluke,” he says.

It describes a series of experiments involving more than 1000 student volunteers. In most of the tests, Bem took well-studied psychological phenomena and simply reversed the sequence, so that the event generally interpreted as the cause happened after the tested behaviour rather than before it.

In one experiment, students were shown a list of words and then asked to recall words from it, after which they were told to type words that were randomly selected from the same list. Spookily, the students were better at recalling words that they would later type.

In another study, Bem adapted research on “priming” – the effect of a subliminally presented word on a person’s response to an image. For instance, if someone is momentarily flashed the word “ugly”, it will take them longer to decide that a picture of a kitten is pleasant than if “beautiful” had been flashed. Running the experiment back-to-front, Bem found that the priming effect seemed to work backwards in time as well as forwards.

‘Stroke Of Genius’

Exploring time-reversed versions of established psychological phenomena was “a stroke of genius”, says the sceptical Krueger. Previous research in parapsychology has used idiosyncratic set-ups such as Ganzfeld experiments, in which volunteers listen to white noise and are presented with a uniform visual field to create a state allegedly conducive to effects including clairvoyance and telepathy. By contrast, Bem set out to provide tests that mainstream psychologists could readily evaluate.

The effects he recorded were small but statistically significant. In another test, for instance, volunteers were told that an erotic image was going to appear on a computer screen in one of two positions, and asked to guess in advance which position that would be. The image’s eventual position was selected at random, but volunteers guessed correctly 53.1 per cent of the time.

That may sound unimpressive – truly random guesses would have been right 50 per cent of the time, after all. But well-established phenomena such as the ability of low-dose aspirin to prevent heart attacks are based on similarly small effects, notes Melissa Burkley of Oklahoma State University in Stillwater, who has also blogged about Bem’s work at Psychology Today.
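Whether a 53.1 per cent hit rate is impressive depends entirely on how many guesses were made. A quick normal-approximation sketch makes the point – the sample sizes below are purely illustrative, since the article does not state Bem’s actual trial counts:

```python
# How surprising is a 53.1 per cent hit rate? That depends entirely on the
# number of guesses. This computes a normal-approximation z-score against
# chance guessing; the sample sizes used below are illustrative only.
from math import sqrt

def z_score(hit_rate, n, chance=0.5):
    """Standard score of an observed hit rate against chance guessing."""
    standard_error = sqrt(chance * (1 - chance) / n)
    return (hit_rate - chance) / standard_error

for n in (100, 1000, 10000):
    print(n, round(z_score(0.531, n), 2))
# 100 guesses: z = 0.62, indistinguishable from chance.
# 10,000 guesses: z > 6, wildly unlikely under chance guessing alone.
```

This is why a “small” effect can still be statistically significant: with enough trials, even a 3.1-point excess over chance sits many standard errors away from pure guessing.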

Respect For A Maverick

So far, the paper has held up to scrutiny. “This paper went through a series of reviews from some of our most trusted reviewers,” says Charles Judd of the University of Colorado at Boulder, who heads the section of the Journal of Personality and Social Psychology editorial board that handled the paper.

Indeed, although Bem is a self-described “maverick” with a long-standing interest in paranormal phenomena, he is also a respected psychologist with a reputation for running careful experiments. He is best known for the theory of self-perception, which argues that people infer their attitudes from their own behaviour in much the same way as they assess the attitudes of others.

Bem says his paper was reviewed by four experts who proposed amendments, but still recommended publication. Still, the journal will publish a sceptical editorial commentary alongside the paper, says Judd. “We hope it spurs people to try to replicate these effects.”

One failed attempt at replication has already been posted online. In this study, Jeff Galak of Carnegie Mellon University in Pittsburgh, Pennsylvania, and Leif Nelson of the University of California, Berkeley, employed an online panel called Consumer Behavior Lab in an effort to repeat Bem’s findings on the recall of words.

Bem argues that online surveys are inconclusive, because it’s impossible to know whether volunteers have paid sufficient attention to the task. Galak concedes that this is a limitation of the initial study, but says he is now planning a follow-up involving student volunteers that will more closely repeat the design of Bem’s word-recall experiment.

This seems certain to be just the first exchange in a lively debate: Bem says that dozens of researchers have already contacted him requesting details of the work.

by Peter Aldhous

To find out where I sourced this article from, please click here.

And to find out more about the author of the article, please visit his website by clicking here.

To learn more about the New Scientist magazine, please click here and visit their website.

The other week I was pondering the immensely complex notion of Karma… Over the last year or so I have spoken to several well-versed Buddhist practitioners about what Karma is exactly… And during our discussions I couldn’t help but notice one comment that cropped up time and again with each of them. Usually I wouldn’t have thought much of it if they had known each other… Or even if they had shared the same teacher… However, each of these practitioners came from a very different Buddhist “school,” and none of them shared any of the same teachers. Thus, when they said what they said, I knew that it was something to heed, to take note of…

What they said was this… “If you think, at any time, that you understand what Karma is… Then the chances are that you don’t.” This important point stuck with me… Leaving me somewhat humbled in my unenlightened state of mind, and I became very cautious about using basic concepts to describe something that was probably unfathomable to someone like myself… Either that, or it shifted so subtly, but surely, from one situation to the next that it could never become a definite, textbook-like certitude, let alone a conceptual understanding. While turning this over and over again in my mind, I found myself remembering how chaos once seemed when I first came across it in the Lorenz attractor… A sort of knowledge that some system existed within certain parameters, and yet one could never quite predict exactly what it was going to do next… Or, in the case of Karma, one could perhaps never quite discern the outcome of life – probably due to the inherent complexity of all the factors within the dynamics of the system.
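That quality of the Lorenz attractor – bounded yet never predictable – can be sketched in a few lines. This toy Euler integration uses my own illustrative step size and starting points, and shows two trajectories beginning a hair’s breadth apart ending up in completely different places:

```python
# The Lorenz system, integrated with crude Euler steps: two trajectories
# launched almost together stay on the same bounded attractor, yet
# quickly diverge until they bear no resemblance to one another.

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + sigma * (y - x) * dt,
            y + (x * (rho - z) - y) * dt,
            z + (x * y - beta * z) * dt)

def trajectory(start, steps):
    state = start
    for _ in range(steps):
        state = lorenz_step(state)
    return state

a = trajectory((1.0, 1.0, 1.0), 10000)
b = trajectory((1.0, 1.0, 1.0 + 1e-8), 10000)  # perturbed by one part in 10^8
separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print(separation)  # a macroscopic distance grown from a microscopic difference
```

The system never leaves its butterfly-shaped region of space – the parameters are known exactly – and yet the tiniest uncertainty in where you start swells until the long-term outcome is, in practice, unknowable.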

Whether or not I will ever get a deeply intuitive grasp of Karma – one that is devoid of any conceptual “boxing-in” or limiting notions – has yet to be seen. However, just the other week I stumbled across this article in the New Scientist magazine… And I felt that somewhere in there one could see how the nature of mind – via a type of memetic understanding – might allow/explain how such a notion as Karma might unfold and affect individuals within a social group or a social dynamic. Perhaps, having read some of the earlier blogs contained within the pages of this website, it might well be seen that human beings, on the whole, are easily swayed into doing things that are untoward to their fellow sentient beings here on Earth. And here, in the marmot case study, we can again see that even animals are prone to inheriting social behaviour from one another, just as humans seem to copy their actions from each other… Spreading memes from one to another.

Another thing that the Buddhist practitioners whom I spoke with mentioned was that we all have a chance to change our Karma. Perhaps this is what We – as human beings – now need to address, especially as the excuse of predation pressure no longer really applies to our present state of cultural existence. Once we wholly grasp that what we do to others is, in a way, memetically programming them – predisposing them to perform similar actions within their social groupings – then perhaps we might well see that a wholesome evolution lies with mindful awareness of how unique each social situation really is… And of every action that we perform in front of anyone else. Then, with this mindful sense of interconnectedness, perhaps we can begin to evolve beyond the old scores of “tit-for-tat” – such as the Palestinian/Israeli conflict – and weave a new dream of open-hearted connection that inspires balance and peace, free from violence and the need to be avenged… Making the notions of war, self-centred importance and greed obsolete. Then we can side-step any problems that might be looming in the supposed end game.

The Primitive Social Network: Bullying Required

Someone gets bullied in every society. It’s bad luck on the victims, but in primitive social groups they might do best to put up with it. If the advantages of group living outweigh the costs of being bullied, evolution might leave some animals resigned to their victim status, thus stabilising the group.

To find out if this is so, Daniel Blumstein of the University of California, Los Angeles, and colleagues studied a population of yellow-bellied marmots living in the Rocky mountains in Colorado. These large rodents have a primitive society: they live in fixed groups, but do not cooperate in the way that many primates and other highly social animals do.

Facebook For Marmots

Blumstein’s team monitored them between 2003 and 2008, keeping records of who interacted with whom and so building up a social network for the group. They also mapped the marmots’ family relationships. By putting the two datasets together, they worked out whether the marmots inherited their social behaviour and positions from their parents.

To their surprise, they found that marmots did not inherit social behaviours that they performed themselves, but they did inherit actions that others performed towards them. “The things they do to others are not inherited, but the things that others do to them are,” Blumstein says. In particular, “the tendency to be victimised is inherited”.

What’s more, well-connected marmots lived longer and reproduced more, even if their social connections put them on the receiving end of aggression. “Interacting with others is valuable, even if the interactions are nasty,” Blumstein says.

Inherited Victimisation

“It’s a surprising result, and I’m not entirely sure how to explain it,” says Julia Lehmann of Roehampton University in London, who was not involved in the study.

Lehmann thinks that animals form groups because sticking together reduces the risk from predators. “As long as the predation pressure keeps up, the group stays together,” she says.

As a result, low-ranking marmots might evolve to cope with being victimised, because it’s better than being eaten. “Staying alive is the most important thing,” Lehmann says.

Blumstein thinks researchers have focused too much on friendly interactions when they study how groups evolved. “We need to think more about the role of aggression,” he says.

by Michael Marshall

Journal reference: Proceedings of the National Academy of Sciences, DOI: 10.1073/pnas.1009882107

To find out where I sourced this article from, please click here.

To find out more about the author of this article, please click here.

OR to visit the author’s website, please do so by clicking here.

To find out more about the New Scientist magazine, please visit their home page by clicking here.

Having recently been to Dr Bruce Lipton‘s talk, entitled “The Biology Of Belief,” which was held in the Logan Hall of the Institute Of Education in London this last Saturday, the 17th of July 2010, I had the idea reinforced that we are nothing more than a collection of atomic mechanisms, built from atomic polymers i.e. DNA, proteins, fatty acids, etc… All arranged into intricate cellular clusters which – given the right circumstances – can function with amazingly natural flows of Being, demonstrating what we can only call, from a self-referencing point of view, natural organic movements… Flows that, over the years, we have – funnily enough – come to call “Life-Like.”

I really believe that we should begin to see Life in these terms… That is, that Life as we presently know it usually results from the complex interactions of the atomic machinery within an enclosed cellular body – a body which, when presented with more differentiated versions of itself, can build larger bodies from highly specialised cellular clusters. Once these are in place, out of it all unfolds a nonlinear biology/biochemistry of perceptive functions, all of which came about through the process of what we now know as ‘chaos’ – rather than through some divine intervention. Life thus becomes nothing more than a complex, naturally occurring chaotic system that ‘intelligently’ reacts and responds, through effective behavioural patterns, to external environmental pressures and stimuli, precipitating survival habits that have been naturally selected for… These behavioural patterns allow Life to survive in an ever-changing environment, and the chaos inherent in our being affords us the ability to utilise the best survival traits that we can – one of which was the development of self-biased tendencies centred around a distinct notion of “self” and “body” that many of us seem to take for granted on a daily basis.

While I will eventually get around to discussing the reality and validity of the “self” in a future blog (something that is taking me much longer than I had anticipated)… In this blog I’d like to try to discuss why this idea of viewing ourselves as a machine is a far more natural and effective notion about our “selves” than any previous egocentric notion about what we really are i.e. that we were created by one or several Gods, in their own images, to be special, etc… Certainly Dr Bruce Lipton’s analogy likens us to a group of living cells which function within the confines of this body as a “community” of beings, each performing their own specific role within the body’s mechanism i.e. just as governments regulate countries and their home economies while policemen arrest criminals, so do certain parts of the central nervous system regulate heart rhythm and bodily temperature while white blood cells kill off infections from ‘maliciously behaved’ bacteria… This idea of self-similarity within the patterns of Life that we see unfolding here on Earth, across all scales and modes of Being, can provide us with a very deep and intuitive understanding of the subtle and – what we tend to call – divine aspects of our Being, as well as showing us all how we interconnect and relate to this universally unfolding discourse…

Bearing in mind this ‘rule’ of self-similarity that seems to present itself so pervasively within and throughout the whole of this universal dynamic… And viewing Life as a type of mechanisation… I am curious as to where – or from which level of scale – the emotive force of Life actually originates. Is it at the level of the body i.e. does it come directly and uniquely from the sum of all its parts, where each individual part would be able to do nothing whatsoever by itself? Or is this trait of the emotive Life force buried deep down within the cellular – or even the atomic – matrix? Certainly, when we try to address what this experience of Life actually is and how it comes about, we can hopefully begin to see that it does not belong only to the body as a whole unit, but also comes from the various levels of functionality within the body i.e. at the cellular and atomic levels.

Just as Jung is concerned as much with the individual within society as with the individual being him/her “self” the measure of society, so too can we apply this analogy to the cell and the body. Without the individual, society cannot function, let alone exist… And without the cell, the body cannot function or even exist. Life and its dynamism stem directly from the units that comprise the whole. These units, just as much as the whole, are all subject to the same forces and methods of development i.e. those of nonlinear evolution. This ‘Life,’ and its essence, relies upon the parameters of these nonlinear, fractal eddies and their dynamics. The cellular bodies that make up our own larger bodies are driven by, and made from, the same underlying principles of naturally occurring algorithmic phenomena… Even though at first glance it might not be obvious that they are… But they are. Thus, if these algorithmic patterns reside across all levels of scale, shape and form, why shouldn’t we expect similar ‘intelligences’ to reside across all scales of these naturally occurring systems, whether at the level of the human body or of the cell? Ultimately it’s up to you what you believe… But, to function better, I personally would like to know a little bit more about the processes that give rise to this “I”; the processes that drive all of Life here on Earth – and possibly beyond too – rather than giving in to dogmatic modes of parrot-fashion understanding.

As Jung once wrote in “The Undiscovered Self“:

Human knowledge consists essentially in the constant adaptation of the primordial patterns of ideas that were given us a priori. These need certain modifications, because, in their original form, they are suited to an archaic mode of life but not to the demands of a specifically differentiated environment. If the flow of instinctive dynamism into our life is to be maintained, as is absolutely necessary for our existence, then it is imperative that we remould these archetypal forms into ideas which are adequate to the challenge of the present.

. . . . . . . .

Our denominational religions with their archaic rites and conceptions – justified enough in themselves – express a view of the world which caused no great difficulties in the Middle Ages but has become strange and unintelligible to the man of today. Despite this conflict with the modern scientific outlook, a deep instinct bids him hang on to ideas which, if taken literally, leave out of account all the mental developments of the last five hundred years. The obvious purpose of this is to prevent him from falling into the abyss of nihilistic despair. But even when, as rationalists, we feel impelled to criticise contemporary religion as literalistic, narrow-minded and obsolescent, we should never forget that the creeds proclaim a doctrine whose symbols, although their interpretation may be disputed, nevertheless possess a life of their own on account of their archetypal character. Consequently, intellectual understanding is by no means indispensable in all cases, but is called for only when evaluation through feeling and intuition does not suffice, that is to say, with people for whom the intellect holds the prime power of conviction.

In order to emphasise this re-evaluation that we need i.e. to understand that we are part of the Earth’s whole ecosystem, just as a cell is part of the body’s ecosystem, I’d like to present here an article which I read not too long ago in the New Scientist magazine… One that tackles this issue of where emotive Life comes from. When we see that Life’s organic flow resides across all levels of being i.e. atomic, cellular, bodily, biospheric, or even within the planet and its solar system, we might begin to understand that some of our older religious notions about the divine state of existence that We – that is, all Life – experience no longer need to be fantasised over or marginalised in any inaccurate way whatsoever. Now, through the doors of science, we can directly see the mechanisms of Life at work, and thus ‘understand’ the essence behind their patterns and interdependent interactions, through which we gain the essence of our Being. Natural ordering comes from the patterns of chance and chaos, which give rise to development and originality within all universal systems, whether biological or otherwise. These systems, given favourable circumstances/environments in which to start, can then begin the arduous process of developing into environmentally perceptive and adaptive complex systems. Human beings are even beginning to use these recursive patterns – which have been called the “Thumb Print Of God” – in their technological developments i.e. to develop semi-intelligent robotic systems that can learn fast and develop effective solutions to presented problems in ways that surpass anything we’ve tried or known before.

Thus, with these many new observations, I believe it is time to re-write our archetypal programming. Just as when I first saw the Mandelbrot Set on a postcard from a friend while at school, and immediately recognised its tortuous, writhing flow as something so familiar and deeply ingrained in my being… So too do all ‘Gods’ leave this same feeling of familiarity… Of spirituality… And of deep connection to the whole… Here lies an answer to a new understanding… That self-similarity resides within all units of the whole… If you find intelligence within the body… Then why not within the cell too… Or even within the atom… After all, one essence is usually found within the other, and so permeates through the entire being. Certainly atoms are just as discerning as human beings are… We all choose what we will or won’t react/socialise/breed with. Does this intelligence then go deeper still? Might it be found within the proton, neutron and/or electron… And, if so, then why not even in the quark… Or the God particle… Etc, etc, etc… ?

The Secrets Of Intelligence Lie Within A Single Cell

Late at night on a sultry evening, I watch intently as the predator senses its prey, gathers itself, and strikes. It could be a polecat, or even a mantis – but in fact it’s a microbe. The microscopic world of the single, living cell mirrors our own in so many ways: cells are essentially autonomous, sentient and ingenious. In the lives of single cells we can perceive the roots of our own intelligence.

Molecular biology and genetics have driven the biosciences, but have not given us the miraculous new insights we were led to expect. From professional biologists to schoolchildren, people are concentrating on the minutiae of what goes on in the deepest recesses of the cell. For me, however, this misses out on life in the round: it is only when we look at the living cell as a whole organism that wonderful realities emerge that will alter our perception not only of how single cells enact their intricate lives but what we humans truly are.

The problem is that whole-cell biology is not popular. Microscopy is hell-bent on increased resolution and ever higher magnification, as though we could learn more about animal behaviour by putting a bacon sandwich under lenses of increasing power. We know much about what goes on within parts of a cell, but so much less about how whole cells conduct their lives.

Currently, cell biology deals largely with the components within cells, and systems biology with how the components interact. There is nothing to counterbalance this reductionism with a focus on how whole cells behave. Molecular biology and genetics are the wrong sciences to tackle the task.

Let’s take a look at some of the evidence for ingenuity and intelligence in cells that is missing from the curriculum. Take the red algae Rhodophyta, in which many species carry out remarkable repairs to damaged cells. Cut a filament of Antithamnion cells so the cell is cut across and the cytoplasm escapes into the surrounding aquatic medium. All that remains are two fragments of empty, disrupted cell wall lying adjacent to, but separate from, each other. Within 24 hours, however, the adjacent cells have made good the damage, the empty cell space has been restored to full activity, and the cell walls meticulously realigned and seamlessly repaired.

The only place where this can happen is in the lab. In nature, the broken ends of the severed cell would nearly always end up remote from each other, so selection in favour of an automatic repair mechanism through Darwinian evolution would be impossible. Yet something amazing is happening here: because the damage to the Antithamnion filament is unforeseeable, the organism faces a situation for which it has not been able to adapt, and is therefore unable to call upon inbuilt responses. It has to use some sort of problem-solving ingenuity instead.

We regard amoebas as simple and crude. Yet many types of amoeba construct glassy shells by picking up sand grains from the mud in which they live. The typical Difflugia shell, for example, is shaped like a vase, and has a remarkable symmetry.

Compare this with the better known behaviour of a caddis fly larva. This maggot hunts around the bottom of the pond for suitable scraps of detritus with which to construct a home. Waterlogged wood is cemented together with pondweed until the larva has formed a protective covering for its nakedness. You might think this comparable to the home built by the testate amoeba, yet the amoeba lacks the jaws, eyes, muscles, limbs, cement glands and brain the caddis fly larva relies on for its skills. We just don’t know how this single-celled organism builds its shell, and molecular biology can never tell us why. While the home of the caddis fly larva is crude and roughly assembled, that of the testate amoeba is meticulously crafted – and it’s all made by a single cell.

The products of the caddis fly larva and the amoeba, and the powers of red algae, are about more than ingenuity: they pose important questions about cell intelligence. After all, whole living cells are primarily autonomous, and carry out their daily tasks with little external mediation. They are not subservient nanobots, they create and regulate activity, respond to current conditions and, crucially, take decisions to deal with unforeseen difficulties.

“Whole living cells are not subservient nanobots, they respond and take decisions”

Just how far this conceptual revolution about cells could take us becomes clearer with more complex animals, such as humans. Here, conventional wisdom is that everything is ultimately controlled by the brain. But cells in the liver, for example, reproduce at just the right rate to replace cells lost through attrition; follicular cells create new hair; bone marrow cells produce new circulating blood cells at a rate of millions per minute. And so on and on. In fact, around 90 per cent of this kind of cell activity is invisible to the brain, and the cells are indifferent to its actions. The brain is an irrelevance to most somatic cells.

So where does that leave the neuron, the most highly evolved cell we know? It ought to be in an interesting and privileged place. After all, neurons are so specialised that they have virtually abandoned division and reproduction. Yet we model this cell as little more than an organic transistor, an on/off switch. But if a red alga can “work out” how to solve problems, or an amoeba construct a stone home with all the “ingenuity” of a master builder, how can the human neuron be so lowly?

Unravelling brain structure and function has come to mean understanding the interrelationship between neurons, rather than understanding the neurons themselves. My hunch is that the brain’s power will turn out to derive from data processing within the neuron rather than activity between neurons. And networks of neurons enhance the effect of those neurons “thinking” between themselves. I think the neuron’s action potentials are rather like a language neurons use to transmit processed data from one to the next.

Back in 2004, we set out to record these potentials, from neurons cultured in the lab. They emit electrical signals of around 40 hertz, which sound like a buzzing, irritating noise played back as audio files. I used some specialist software to distinguish the signal within the noise – and to produce sound from within each peak that is closer to the frequency of a human voice and therefore more revealing to the ear.

Listening to the results reprocessed at around 300 Hz, the audio files have the hypnotic quality of sea birds calling. There is a sense that each spike is modulated subtly within itself, and it sounds as if there are discrete signals in which one neuron in some sense “addresses” another. Could we be eavesdropping on the language of the brain?
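The reprocessing trick Ford describes can be illustrated with a toy example (my own sketch, not his actual specialist software): a recording played back faster has all of its frequencies scaled by the same factor, so a 40 Hz buzz played roughly 7.5× faster lands near 300 Hz, in the range of a human voice.

```python
import numpy as np

# Toy illustration of pitch-shifting by playback rate (not the author's
# software): the same samples, interpreted at a higher sample rate, shift
# a 40 Hz "neural buzz" up to ~300 Hz.

def dominant_frequency(signal, sample_rate):
    """Return the strongest frequency component of a real signal via FFT."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

record_rate = 8000                          # Hz, an arbitrary recording rate
t = np.arange(record_rate) / record_rate    # one second of samples
spike_train = np.sin(2 * np.pi * 40 * t)    # stand-in for a 40 Hz signal

print(dominant_frequency(spike_train, record_rate))    # -> 40.0 Hz as recorded
playback_rate = record_rate * 7.5           # reinterpret samples 7.5x faster
print(dominant_frequency(spike_train, playback_rate))  # -> 300.0 Hz
```

Real spike recordings are of course far messier than a pure sine wave, and the modulation within each spike that Ford hears would need the waveform itself, not just its dominant frequency.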

For me, the brain is not a supercomputer in which the neurons are transistors; rather it is as if each individual neuron is itself a computer, and the brain a vast community of microscopic computers. But even this model is probably too simplistic since the neuron processes data flexibly and on disparate levels, and is therefore far superior to any digital system. If I am right, the human brain may be a trillion times more capable than we imagine, and “artificial intelligence” a grandiose misnomer.

I think it is time to acknowledge fully that living cells make us what we are, and to abandon reductionist thinking in favour of the study of whole cells. Reductionism has us peering ever closer at the fibres in the paper of a musical score, and analysing the printer’s ink. I want us to experience the symphony.

by Brian J. Ford

Despite the author’s final sentiments, I still feel that this reductionism does provide us with certain, otherwise unobtainable, clarities for understanding the similarities between the processes within and without… After all, one needs to know how to make paper and ink, and understand something about the technique of musical scoring, before one can write a symphony down for the future enjoyment of others…

To find out where I sourced this article from, please click here.

And to learn more about Dr Bruce Lipton and some of the brilliant work he is doing, please click here.

So can a simple sense of touch actually influence our decisions in the everyday world!? I mean, if you’re touching something nice and fluffy, like a fluffy teddy bear holding a big heart, while making a decision about going to war with a nation of people who apparently blew up your “World Trade Centre…” Could this ‘fluffiness’ actually make you choose a softer route of attack? Like… Could it make you donate loads of money to your ‘enemies’ for better education and health services in their region, and thus promote a healthy relationship, rather than fuelling an already raging fire with more bombs and death? Are we really that sensitive to external sensory stimuli!?

No doubt our sensitivity to all external stimuli while we make everyday choices in the world around us has been discussed before here… We’ve even discussed how we let advertisers into our heads… I’ve even tried to show how unaware we are of external influences, especially when making decisions about our conduct after having been exposed to “power” related stimuli, such as money!? I know… It’s mad. Perhaps Al Qaeda would do better to send all their “fluffy” toys to The Pentagon, rather than envelopes containing anthrax OR threats about bombing cities, etc., to influence the powers of the mighty US of A. Obviously fighting fire with fire isn’t going to make things any cooler.

And here again, in a great little New Scientist article, we can see how we might be unwittingly influenced into forming opinions based on our present circumstances, rather than offering unbiased sentiments solely from a detached and informed perspective.

ARE you sitting comfortably? It could affect your impression of this story. So say researchers who have shown that tactile sensations can influence the judgements we make in everyday situations.

Joshua Ackerman at the Massachusetts Institute of Technology and his colleagues ran six tests on people in the street, to see whether the objects they were touching could influence judgements and decision-making.

In one test, passers-by were asked to judge a job candidate by looking at their résumé. Half were given the résumé on a heavy clipboard, the rest were handed it on a light clipboard. When asked to rate the seriousness of the candidate on a scale of 1 to 9, those with the heavy clipboard judged the candidate as more serious than those with the light (Science, DOI: 10.1126/science.1189993).

In another task, volunteers who sat on a hard seat were less willing to change their price in a hypothetical car purchase than those sitting in a soft seat.

The authors suggest that our use of tactile concepts in metaphors that relate to behaviour, such as having a “rough” day or being “solid” as a rock, might influence our judgement: touching similar textures reminds us of their linguistic links to behaviour.

To find out where I sourced this article from, please click here.

Just the other morning, while sifting through articles on the New Scientist website that might be of interest to my studies, I couldn’t help but notice all the advertising that I was being bombarded with. And then, as if by chance, I stumbled upon an article entitled “Unconscious Purchasing Urges Revealed By Brain Scans.” Talk about synchronicity!? As I have discussed once before in “Letting ‘Them’ Into Our Heads,” it seems that even simple exposure to retail ‘products’ – whether while reading a magazine, surfing the web, or even while viewing a film – can prompt our minds to automatically and/or unconsciously impart some sort of value onto them… Why do we do this? Possibly because of market conditioning… Don’t get me wrong. There isn’t some single-minded, malicious man marketing things that we don’t really need just for his personal gain, making us believe we really need them… Not at all! Rather, it’s just the system we’ve all created for ourselves to ‘benefit’ from, and now we’re almost blind to the fact that we don’t need it as much as we think we do.

Well… That’s obviously my humble opinion on it.

Unconscious Purchasing Urges Revealed By Brain Scans

You spend more time window shopping than you may realise. Whether someone intends to buy a product or not can be predicted from their brain activity – even when they are not consciously pondering their choices.

The ability to predict from brain scans alone what a person intends to buy, while leaving the potential buyer none the wiser, could bring much-needed rigour to efforts to meld marketing and neuroscience, says Brian Knutson, a neuroscientist at Stanford University in California who was not involved in the research.

Neuromarketing, as this field is known, has been employed by drug firms, Hollywood studios and even the Campbell Soup Company to sell their wares, despite little published proof of its effectiveness.

Rather than soup, John-Dylan Haynes at the Bernstein Center for Computational Neuroscience in Berlin, Germany, attempted to predict which cars people might unconsciously favour. To do so, he and colleague Anita Tusche used functional MRI to scan the brains of two groups of male volunteers, aged 24 to 32, while they were presented with images of a variety of cars.

One group was asked to rate their impressions of the vehicles, while the second performed a distracting visual task while cars were presented in the background. Each volunteer was then shown three cars and asked which they would prefer to buy.

First impressions

The researchers found that when volunteers first viewed the car that they would subsequently “buy”, specific patterns of brain activity could be seen in the brain’s medial prefrontal and insula cortices – areas that are all associated with preferences and emotion.

These patterns of activity reflected the volunteers’ subsequent purchasing choice nearly three-quarters of the time, whether or not the subjects had given their undivided attention to the images of the cars when they were first shown them.

Previous studies have shown similar patterns of activity when we make explicit purchasing choices. What this new study suggests is that these brain regions size up products even when we are not consciously making purchasing decisions. The brain appears to be imparting automatic or possibly even unconscious value onto products, as soon as you’re exposed to them, says Haynes.
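The decoding idea behind results like this can be sketched with a toy example (hypothetical data and a deliberately simple method, not the study’s actual fMRI pipeline): treat brain activity as a vector of voxel responses, build an average “template” pattern for each condition, and assign a new trial to whichever template it correlates with more strongly.

```python
import numpy as np

# Toy template-correlation decoder on made-up "voxel" data (an illustrative
# assumption of mine; the study used full multivariate pattern analysis).

rng = np.random.default_rng(0)

def correlation(a, b):
    """Pearson correlation between two activity patterns."""
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical mean voxel patterns for later-chosen vs rejected products.
chosen_template = rng.normal(0, 1, 100)
rejected_template = rng.normal(0, 1, 100)

def predict(pattern):
    """Classify a trial by which condition template it correlates with more."""
    return ("chosen"
            if correlation(pattern, chosen_template)
            > correlation(pattern, rejected_template)
            else "rejected")

# A noisy new trial resembling the "chosen" template decodes correctly:
trial = chosen_template + rng.normal(0, 0.5, 100)
print(predict(trial))
```

With realistic fMRI noise the decoder is right only some of the time – hence the “nearly three-quarters” accuracy quoted above rather than anything close to perfect prediction.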

Secret desires

While Knutson acknowledges that the volunteers’ choices might have been different if they had been making a real decision about which car to buy, he reckons the study may still be of use to neuromarketers – specifically as an objective way of determining whether a consumer might buy a product or not, without their having to be explicitly asked.

This kind of approach might be particularly useful for inferring people’s opinions of products they would be reluctant to admit to buying, says Haynes, although he emphasises that he is unwilling to promote neuromarketing for this purpose.

Journal reference: Journal of Neuroscience, DOI: 10.1523/JNEUROSCI.0064-10.2010

by Ewen Callaway

However you want to take this… I’d certainly advise all of us to be more mindful of whatever we are exposed to, OR choose to expose ourselves to.

To find out where I sourced this article from, please click here.

And to find out more about Brian Knutson, an Associate Professor of Psychology & Neuroscience at Stanford University, please click here.

Or to follow Ewen Callaway on Twitter, please click here.


Just the other day I was speaking to a friend about who we really were i.e. what defines ‘us’, what is real about ‘us’, what makes ‘us’ us… And after we’d finished discussing The Grand Delusion Of Self, he decided that it was definitely our body that defined us.

So came to light the question of the period within which our cells replenish and replace themselves. I had no idea about the exact facts or figures, but I had heard that every cell in the body replaces itself at least once every seven years. But… As hearsay is nothing more than ‘scuttlebutt’ at the best of times, I decided to research this topic further. And, thus, I came across the following article in the New Scientist, which covers the issue with a thoroughness that left me without any doubt that… Even though our bodies appear to be a solid structure of form and function that remains true, albeit with a bit of ageing, for the rest of our lives, they are certainly not as defining an aspect of ourselves as some of us would like to think!? Why? Well… Even though I’ve been alive for 33 years here on Earth, my body is – on average – only 15 years old.

When we are presented with such undefinable aspects of the notion of our “self,” doesn’t it seem that we are sometimes overly prone to worrying about something which does not really exist? I mean, fair enough, we have a need to survive and avoid certain death, for we are carrying the torch of Life for future generations, as an Olympian carries the flame from Mount Olympus to start each Games. But to obsess about ourselves; to worry about ourselves beyond reason… Well, isn’t that missing the point of Life? Aren’t we really worrying about nothing? After all, we are nothing more than a collection of schemas/memes – ideas that originate from other people – that loosely attach to this framework of a body via the brain’s structure and ability; a body which is built from the stardust of ancient suns long extinguished, working on principles of chaos, weaving unpredictability into modes of ‘apparent’ understanding… An understanding that modifies itself all the time – via our constant study – into ever cosier comprehensions about the nature of reality and the beauty that guides it.

I mean, isn’t this uncertainty simply wonderful? For the first time it truly frees us from the confines of our own predefined humanity. It allows us to see that even WE – the predesignated arrangement of atoms that makes up our body, giving us substance in this world – are an uncertainty. I know this experience we are having seems pretty real, i.e. “I” am really aware of the keyboard as my fingers type these words out on the keys in patterns of “QWERTY” order, and I can even interrelate these present experiences with past ones, and even calculate (with a fairly accurate estimation) the chances of what might happen in the immediate future if I were to perform certain actions – like what would happen if I were to ride my bike at twenty miles per hour into the lake in the park… I’d go “SPLOSH!” and get rather wet, while ducks quack and fly off in all directions. BUT… Despite these amazing feats of organic supercomputing, our bodies and our memories are ever changing and ever shifting like the dunes of a great desert. We’re just not really aware of them ever changing (unless we are a Buddha)… Because we fuse a solid, graspable concept – a notion of certainty – onto something so uncertain, we continually delude ourselves, defending our reality/existence – that certainty of “I” – with marginalised concepts that never really seem to change.

Perhaps this is something we should all bear in mind… That, while we might feel solid and certain at many points in our lives, ‘WE’ really are as fickle as the dunes of the Sahara. As Nisargadatta Maharaj once said, “When you have seen the dream as a dream, you have done all that needs to be done.”

New Scientist

Here’s a question: how old are you? Think carefully before you reply. It’s a lot trickier than you might imagine. The correct answer, it turns out, is about 15 and a half. According to recent research, that’s the average age of your body – your muscles and guts, anyway. You might think that you have been around since the day you were born, but most of your body is a lot younger.

That may come as no surprise. It’s a common belief that the human body completely renews itself every seven years, and though biologists would hesitate to put a firm figure on it most are happy to accept that cells eventually wear out and are replaced. In some tissues – skin and blood – we know how long it takes, for example from seeing how long transfused blood cells last. Surprisingly, however, we have no idea how often most cell types are replaced, if indeed they are replaced at all. Until a few months ago it was impossible to tell. Experiments on mice had hinted that some cells are replaced more often than others, but no one was sure how relevant the findings were to humans.

Now neurologist Jonas Frisén of the Karolinska Institute in Stockholm, Sweden, has invented an ingenious technique for determining the age of adult cells. He and others are using the technique to answer questions that have intrigued scientists and laypeople for decades: does cell turnover mean that you eventually renew your entire body? If so, how many bodies do you go through in a lifetime? If you live to a ripe old age, is there anything left of the original “you”? There’s more to it than curiosity value, though. The rate of cell turnover is a hot question in neuroscience and regenerative medicine, and may provide the key to treating numerous diseases and managing the effects of ageing.

Questions about the rates of cell renewal first arose about 100 years ago, when scientists discovered that most of our neurons are formed during fetal development and persist for life. Ever since, people have been wondering if the brain’s cerebral cortex – the seat of executive functions such as attention and decision-making – ever makes new cells. In the 1960s neurologists discovered that rodents and cats may make new neurons. Then in 1999 a study in Science caused great excitement with the claim that new growth had been found in the cerebral cortex of monkeys. Despite numerous attempts, however, the results have never been repeated.

Information about the lifespan of cells has historically come from experiments on rats and mice. The method involves giving the animals radioactive nucleotides, the building blocks of DNA, either in their food or by injection. The assumption is that if cell turnover is going on, new cells will incorporate labelled nucleotides into their DNA. Post-mortem tests can later reveal how much tagged DNA there is in various tissues and hence what proportion of cells were born during the animal’s exposure to the nucleotides. These experiments undoubtedly tell us about cell turnover rates in rodents but it is unclear whether the results can be extrapolated to humans. Because humans live for decades rather than months, we might have a greater need to replace our cells.

Feeding radioactive genetic material to humans, however, is clearly not on. Some researchers have attempted to date cells by other means such as measuring the lengths of telomeres, the DNA stubs on the end of chromosomes that shorten each time a cell divides. But no one has ever been able to develop a reliable method for reading age from telomere length. What’s worse, says Frisén, “some cells, such as stem cells, appear to be able to lengthen their telomeres, which would be a problem when trying to assess the cell’s age, especially in the brain”.

Frustrated with the lack of progress, Frisén decided there had to be another way. “My train of thought ran to the ancient Egyptian papyrus scrolls, which were carbon-dated, and I wondered if there was a way we could use that,” he says.

Carbon dating relies on measuring the amount of carbon-14 in a sample of organic matter. Carbon-14, a rare and weakly radioactive isotope of carbon, is continually produced in the atmosphere when neutrons generated by cosmic rays smash into nitrogen nuclei, stripping out a proton. Carbon-14 eventually decays back to nitrogen, with a half-life of 5730 years. But before it decays, carbon-14 can be taken up by plants during photosynthesis and converted into sugars. Animals eat the plants, and in this way all living things contain small amounts of carbon-14 – about 1 in a trillion carbon atoms in your body are carbon-14 rather than carbon-12. At death, however, the organism stops taking in carbon-14, and what it already contains eventually decays away.

That slow decay is what makes carbon dating of archaeological samples possible. By measuring the ratio of carbon-14 to carbon-12 in something that was once alive you can estimate when it died – up to 60,000 years ago, after which carbon-14 levels have fallen too much to be useful.
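The arithmetic behind that estimate can be sketched in a few lines. Since carbon-14 decays exponentially with a half-life of 5,730 years, a sample retaining a fraction f of its original carbon-14 died t = 5730 × log₂(1/f) years ago. (This is only the bare formula; real radiocarbon dating also calibrates against known historical swings in atmospheric carbon-14.)

```python
import math

# Bare-bones radiocarbon dating: invert exponential decay with the
# carbon-14 half-life. Calibration against atmospheric records is ignored.

HALF_LIFE = 5730.0  # years, carbon-14 half-life

def age_from_fraction(fraction_remaining):
    """Years since death, given the surviving fraction of carbon-14."""
    return HALF_LIFE * math.log2(1.0 / fraction_remaining)

print(round(age_from_fraction(0.5)))    # -> 5730, one half-life
print(round(age_from_fraction(0.25)))   # -> 11460, two half-lives

# At ~60,000 years only about 0.07% of the carbon-14 remains, which is why
# the method runs out of signal around that age:
print(round(2 ** (-60000 / HALF_LIFE) * 100, 3), "% remaining")
```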

Slow decay, however, also makes the method fairly imprecise. An archaeological radiocarbon date is accurate only to between 30 and 100 years, depending on the age of the sample – fine for ancient Egyptian artefacts but useless for dating cells in a human body.

Frisén’s eureka moment arrived when he realised he could use carbon-14 in a different way thanks to a unique episode in recent history – the cold war arms race. Between 1955 and 1963, above-ground nuclear weapons tests loaded masses of carbon-14 into the atmosphere. At the peak of such tests in 1963, atmospheric levels of carbon-14 reached twice the normal background level (see Diagram below). This “bomb spike” was accurately recorded at locations all over the world, creating a unique window of opportunity that Frisén is now exploiting.

He reasoned that while most molecules in a cell are in a constant state of flux, DNA is very stable: when a cell is born it gets a set of chromosomes that stay with it throughout its life. Therefore the level of carbon-14 in a living cell’s DNA is directly proportional to the level in the atmosphere at the time it was born, minus a tiny amount lost to radioactive decay. Before 1955 that level was always roughly the same. But during the bomb spike, atmospheric levels rose and then fell again – and so did carbon-14 levels in cells’ DNA. What that meant, Frisén realised, is that he could take cells born after 1955, measure the proportion of carbon-14 in their DNA and then consult the bomb spike curve to obtain an estimate of their date of birth.

If Frisén was right, for the first time scientists would be able to work out the average age of cells in different parts of the body and, he hoped, finally settle the question of whether the brain makes new nerve cells.

Before he could start, Frisén needed to know how long the window of opportunity was open for. Ever since the 1963 partial test ban treaty, carbon-14 in the atmosphere has been declining steadily, halving every 11 years as it is absorbed by the oceans and biosphere. Even so, Frisén found that any cell born between 1955 and 1990 would contain enough extra carbon-14 in its DNA to give a reliable date, give or take a year or so.
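Frisén’s bomb-spike reasoning can also be sketched as a simplified model. My assumptions here (the real curve is measured data, not a clean exponential): atmospheric carbon-14 sat at a baseline of 1.0 before 1955, peaked at 2.0× baseline in 1963, and the excess has halved every 11 years since. A cell’s DNA locks in the atmospheric level of its birth year, so on the declining side of the spike the measured level can be inverted into a birth year; the rising side, 1955 to 1963, would in reality give a second candidate year that the model below ignores.

```python
import math

# Simplified bomb-spike model (my assumptions, not the measured curve):
# relative atmospheric C-14 level L(y) = 1 + 2^(-(y - 1963) / 11) for
# years after the 1963 peak, which is easily inverted to a birth year.

PEAK_YEAR = 1963
EXCESS_HALF_LIFE = 11.0  # years for the atmospheric excess to halve

def atmospheric_level(year):
    """C-14 level relative to the pre-1955 baseline (post-peak decline only)."""
    if year < PEAK_YEAR:
        raise ValueError("only the post-1963 decline is modelled here")
    return 1.0 + 2.0 ** (-(year - PEAK_YEAR) / EXCESS_HALF_LIFE)

def birth_year_from_level(level):
    """Invert the declining curve: relative DNA level -> estimated birth year."""
    return PEAK_YEAR - EXCESS_HALF_LIFE * math.log2(level - 1.0)

# A cell whose DNA holds 1.25x the baseline was born two excess half-lives
# after the peak:
print(birth_year_from_level(1.25))        # -> 1985.0
print(round(atmospheric_level(1985), 3))  # -> 1.25
```

Note that radioactive decay of the carbon-14 inside the DNA itself is negligible here: over 50 years only about 0.6 per cent decays, so the level a cell was born with is essentially the level you measure.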

Last year Frisén and his team reported preliminary tests on a few body tissues taken from cadavers of people who had been alive during the bomb spike (Cell, vol 122, p 133). They revealed for the first time how many different ages one human body can be.

The body’s front-line cells endure the roughest life, last the briefest time and are constantly replaced – these include the epithelial cells lining the gut (five days), the epidermal cells covering the skin’s surface (two weeks) and red blood cells (120 days).

Cells Frisén analysed from the rib muscles of people in their late 30s had an average age of 15.1 years, a similar lifespan to cells making up the body of the gut, which he found were around 15.9 years old on average. It seems our bodies are indeed in a constant state of breakdown and renewal – even the entire skeleton is replaced every few years, he says.
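As a back-of-envelope aside (my simplification, not anything the article derives): if a tissue replaces its cells at a steady rate, with each cell living L days, then at any instant cell ages are spread roughly evenly between 0 and L, so the tissue’s average cell age is about L/2. Under this toy model, the front-line lifespans quoted above halve into average ages, and a measured average age of ~15 years would correspond to a cell lifespan of roughly 30 years.

```python
# Toy steady-state turnover model (an assumption of mine, not the article's):
# with constant renewal and lifespan L, cell ages are uniform on [0, L], so
# mean age is L / 2, and a measured mean age A implies a lifespan of ~2 * A.

def average_age(lifespan_days):
    """Mean cell age under steady turnover with lifespan L (days)."""
    return lifespan_days / 2.0

def implied_lifespan(mean_age_years):
    """Invert the model: measured mean age -> implied cell lifespan (years)."""
    return 2.0 * mean_age_years

# Front-line tissues quoted above (lifespans in days):
for tissue, days in [("gut epithelium", 5), ("epidermis", 14),
                     ("red blood cells", 120)]:
    print(f"{tissue}: average cell age ~{average_age(days):.1f} days")

# The ~15.1-year average age of rib-muscle cells would imply, under this
# model, a muscle-cell lifespan of about 30 years:
print(round(implied_lifespan(15.1), 1))  # -> 30.2
```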

Exciting though these forays into uncharted territory were, Frisén was eager to get on with his original quest, working out the age of the cells in the brain. “I am a neurologist and that is where my love lies,” he explains.

“Of course I want to know how often our body cells are replaced – we will do it little by little, and I hope that experts in all those areas take on the research and help us. But I want to explore the areas of the brain and discover whether we generate new brain cells throughout our adult lives.”

The standard view from animal studies – and one man who agreed to have labelled nucleotides injected into his brain as he was dying from cancer – is that once the brain is formed, no new neurons are generated except in two areas: the hippocampus and a region around the ventricles.

Frisén first applied his new method to cells taken from the visual cortex. Here, as expected, the neurons turned out to be the same age as the person they came from – perhaps because they need to be wired up in a very stable way so that each time an object or colour is viewed it is perceived in the same way as before, he speculates. Cells in the cerebellum, which is involved in coordinating movement, turned out to be about 2.9 years younger on average than the person, which is consistent with the idea that this region continues to develop during infancy.

“We’ve now mapped the rest of the cortex and are well on our way with the hippocampus,” says Frisén. “So far, it doesn’t look like there are any new cells being formed in the cortex – they’re as old as you are. But some regions of the hippocampus are exciting – absolutely there’s neurogenesis.”

Medical Breakthroughs

Frisén isn’t just motivated by curiosity. He hopes that by uncovering the secrets of cell turnover in the brain, he can help shed light on diseases including depression and Alzheimer’s. In 2004, a team led by Rene Hen at Columbia University in New York demonstrated that mice appeared to become depressed if hippocampal stem cells were not making enough new neurons, and that drugs such as Prozac work by stimulating neurogenesis: when the team inhibited neurogenesis, the antidepressants stopped working (Science, vol 301, p 805).

Alzheimer’s, too, has been associated with a lack of neurogenesis in the hippocampus, and other brain disorders, including Parkinson’s, are linked to cell death not being balanced by adequate cell creation. Frisén’s group is now studying cell turnover in people with neurodegenerative diseases.

The brain is not the only organ where information on cell turnover may provide clues to treating disease. Knowing how frequently healthy people produce new fat cells, for example, could help treat obesity: at the moment nobody knows whether obesity is the result of having enlarged fat cells or a greater number of them. Similarly, understanding the normal turnover of liver cells – which animal studies suggest have a lifespan of 300 to 500 days – could help physicians spot abnormalities such as cancer. And understanding the cell turnover rates in the pancreas could eventually help us to manipulate the organ’s lifespan with a view to treating diabetes.

There are other possibilities too. Experts believe heart muscle cells are not renewed when they die, leaving gaps that are filled with fibrotic material, resulting in a gradual loss of cardiac function as we get older. But no one knows for sure. Frisén’s group has just started preparing some heart tissue for analysis to see whether heart muscle cells are ever renewed.

Meanwhile, a group at the University of California, Davis, led by Krishnan Nambiar, is using Frisén’s method to investigate the lens of the eye. Cells in the transparent inner part of the lens form five weeks into embryonic life and stay with you for life. New cells are generated from the periphery, where they build up and make the lens thicker and less flexible with age, sometimes leading to cataracts. “If we could learn more about the turnover of cells in the lens, we could perhaps learn how to delay the onset of cataracts for five years and make tremendous savings in the health budget,” explains Bruce Buchholz at the Lawrence Livermore National Laboratory, who uses accelerator mass spectrometry to carry out the carbon-14 analysis of Nambiar and Frisén’s samples.

It is clear, then, that a large proportion of your body is significantly younger than you are, and that raises a paradox. If your skin, for example, is so young, why don’t you retain a smooth complexion even into old age? Why can’t a 60-year-old woman, with her youthful muscle cells, flick-flack across the floor with the acrobatic agility of a 10-year-old girl?

The answer lies with mitochondrial DNA. This accumulates mutations at a faster rate than DNA in the nucleus. As soon as you are born, your mitochondria start taking hits – and there is nothing much you can do about it. So while your cells may be only a third as old as you are, the snag is that your mitochondria are the same age. In skin, for instance, mitochondrial mutations are thought to be responsible for the gradual loss in the quality of collagen, the skin’s scaffolding, which is why skin loses its shape and forms wrinkles.

There is good news, however. If we ever find ways to protect or repair mitochondrial DNA – and there are many ideas for how to do so – the discovery that most of our cells are younger than we are means that we could significantly delay ageing. Perhaps in the future people really will struggle to answer the question “How old are you?”

written by Gaia Vince


While going through the New Scientist‘s website this morning, I stumbled upon an interesting article that alluded to the nature of human social interaction, and to how scientists are beginning to see more deeply into the way the human brain/mind complex develops the social constructs that allow us to come together within imagined civilised structures. No doubt, if you have read Susan Blackmore’s “The Meme Machine” then you will probably have noticed how this ties in beautifully with what Blackmore discusses towards the end of her brilliant critique, i.e. that altruistic behaviour is naturally selected for within evolution’s flow… And that the resulting interactions are probably better “described as a single, complex system rather than as two systems interacting.”

No doubt there are parallels here with what David Bohm spoke about in the last blog… Parallels with the evolution of all Life here on Earth, i.e. with the abundance of single-celled organisms that started Life out here on the Earth (such as the cyanobacteria that built the stromatolites), which, through natural selection, discovered a process of coming together, whereby single cells became parts of a larger “Whole” better suited to functioning as one versatile, self-sustaining, single-minded, complementary system working together, rather than as many tiny individual systems competing with one another for resources.

The power and beauty of a simple analogy like this bring into focus the self-similarity that exists within the universe’s structure, hinting at a universal fractal dynamic… A sort of Indra’s Net of cause and effect that spirals into galaxies, which in turn are built from suns forging the varied atomic dust that gives rise to planets, some of which have the fortunate chance to host atomic arrangements that manage to become self-aware, discovering their own Life and the varying aspects of awareness that this pattern of existence brings.

But I digress…

YOU know how it works. A student volunteer sits alone in a soundproof booth, watching a computer screen and waiting for moving dots to appear. When they do, he or she has to decide whether there is a walking man hidden somewhere in those dots. If there is, and he is walking left, the volunteer has to press the left button. It’s a tricky task, and most of the time people end up guessing.

In our view, this kind of traditional experiment has a serious limitation: it does not take into account the influence of social interaction. On the surface, of course, no social communication is involved, as the volunteer is alone in a room. But dig deeper, and you’ll find plenty. For one thing, the man hidden in the dots is a social stimulus, although not one that can interact. Such experiments involve social communication at another level, too. Any participant brings his or her baggage about what psychologists are like and how volunteers should behave.

The problem is that these hidden social interactions remain out of focus in the experiment. Our aim at the Interacting Minds project at the Danish Neuroscience Centre in Aarhus is to develop a new kind of experiment that is focused on such interactions.

continued here

