Question: I have been reading lots of material about flax seed and flax oil. They are starting to get as much attention in the field of natural foods as soy. What is your take on flax?

Answer: Like soy, flax is gaining ground as the next “great thing” you can do for your health. And again like soy, much of the scientific research on flax and flax oil comes from similar sources: “agenda-driven” science funded by special interest groups intent on saturating the marketplace with products that are easily mass-produced and marketed at inflated prices. Much of the research on flax oil (the oil extracted from flax seed, or linseed) is focused on the high levels it contains of what are called omega-3 fatty acids.

Omega-3 fatty acids (alpha-linolenic acid) are essential fatty acids that cannot be made by the body and therefore must be provided by the diet. These essential fatty acid precursors are converted in the body to EPA and DHA, both of which support the hormone-like substances known as prostaglandins, which are vital to the regulation of metabolism and other bodily functions. Refining and hydrogenation destroy the vital omega-3 oils, leaving modern-day consumers with high levels of omega-6 fatty acids and little of the omega-3s so important to biological functions.

Some experienced health practitioners believe flax and flax oil to be the ultimate panacea for many problems. Others feel the tiny seed is simply another “band-aid” approach to health and that it would be wise to seek more reliable sources of omega-3 fats, such as fish liver oils, which, unlike flax oil, are readily converted to a usable form by the human body. Fish, along with wild and naturally raised animal meats and eggs, were traditional sources of omega-3s for thousands of years.
Speaking of thousands of years, let’s go back a bit and consider some history on flax. There are several commonly expressed opinions and statements from both researchers and laypeople pertaining to the historical relevance of flax. These same statements are used to support the idea that consuming flax is beneficial to health. Let’s take a look at some of these statements.

“Flax has been used for five thousand years in Egypt, Mesopotamia, China and numerous other places throughout the world.”

“Flax oil was used in ancient Egypt since the time of the Pharaohs.”

“Flax was once a staple food of the Roman Empire.”

These and many other statements, along with the latest scientific research, are enough to convince even the most skeptical of us that we are indeed missing out on something if we are not consuming flax in some form or another.

While it is true that flax was used by ancient cultures for thousands of years, and was used extensively throughout the world in textile manufacturing, as a base for paint and for preserving wood, it was not a principal food of any culture. Evidence does exist, however, to show it has been used as a supplemental food when traditional foods were scarce due to adverse climatic conditions and other destabilizing influences. As a food, flax may have been used as a last resort in times of famine, as has been the case in Ethiopia and a few other parts of the world during droughts, when grain crops were compromised. More recently, in the last few hundred years or so, ground flax boiled in water was used both as an internal remedy for colds, coughs and urinary irritation and as a medicinal poultice applied externally for boils and abscesses. Therefore, while flax seed does have some limited use as a food in recent history, flax oil, on the other hand, has no historical use as a food.

In Food in Antiquity, a survey of the diet of early peoples by Brothwell and Brothwell, we read:
“…in Mesopotamia and Egypt, though it may have been more valued as a textile material than for its oil potential. In Europe, however, the picture is different. Flax was grown in Neolithic Spain, Holland and England, and the Swiss prehistoric lake dwellings have yielded seeds from the beginning of the third millennium BC. A sort of linseed cake was found at Robenhausen…” This “sort of linseed cake” is certainly not a cake made from linseeds, and it does not mean flax was used as a food. Pressing seeds into balls is a common traditional method of oil extraction; compressing seeds into a rounded shape was a way of preparing them for grinding by hand with a rock to extract the oil, which could then be used to cure wood, leather or rope, or mixed to form a highly absorbent, drying paint. Eating whole flax seed pressed into a patty of some sort would cause serious digestive distress and is not something any intelligent prehistoric human being would do more than once, if that, so these “cakes” certainly were not food. Neither was the oil used for human consumption, because it is highly unstable and traditional peoples did not have refrigerators, a technology needed to slow the rapid rancidity flax oil undergoes when exposed to oxygen.

Carbonized flax, together with camelina seed, has been found at a Roman Iron Age site in Denmark. Both of these seeds have a high oil content, and it seems they were grown together for this purpose. At the same site was found what resembled a “cake” of poppy seeds, also suspected to be another source of oil. The same source (Food in Antiquity) also states:
“None of these oil-producing plants can be said to have had any great significance as food but in the case of the olive…”

While these seeds (flax and poppy) were grown for their oil, it is highly unlikely the oil was consumed as a food, as both oils are highly unstable and not suitable for human consumption when extracted by crude methods. Proponents of flax oil use many scientific studies on omega-3 fatty acids to encourage the belief that because flax oil has been used extensively in paint and other industrial applications for thousands of years, it must also have been consumed as a food for thousands of years; the same goes for the seeds. This couldn’t be further from the truth.

All right, let’s indulge ourselves for a moment and say yes, it is true: flax formed an integral part of some ancient peoples’ diets. Was this a good thing? Current science stresses that flax oil must be eaten raw and unheated because it cannot withstand the high temperatures of cooking, that it should be refrigerated to prevent rapid deterioration, and that a certain toxic chemical in flax seed can only be neutralized through cooking or processing of some sort. If that science is correct, one would be hard-pressed to believe that the ancient Chinese, Indians and Egyptians, with thousands of years of wise traditions in food and medicine, unwittingly created health hazards for themselves by cooking their food with flax oil or eating raw flax seeds. Additionally, there is no reason to believe they would consider consuming such a foul-tasting oil raw when other healthy and tasty fats and oils were prevalent, essential staple foods worldwide. Storing and preserving flax oil for human consumption was not an option for traditional peoples. There is no evidence for cooking with flax oil either. The only evidence for the use of flax oil in a traditional setting is for purposes other than human consumption.

Historical evidence is always subject to interpretation, and when modern scientific methods are applied to the evidence we find, in the case of flax oil, little reason for traditional peoples to have consumed it, given its harmful qualities when not handled with modern technologies.

A quick read of the label on a can of linseed oil from your local hardware store describes the way linseed oil (flax) heats as it dries and how deeply it penetrates wood. This is in reference to applying the oil to wood in order to preserve it. The can also says it contains 100% linseed oil and “Danger, harmful if swallowed!” Solvent extraction of commercial linseed oil removes much of the antioxidants that retard rancidity and help stabilize the polyunsaturated fatty acids. When these fatty acids are not protected, they produce gummy, plastic-like residues called polymers, toxic substances unfit for human consumption that have destabilizing effects on cell walls and cause blood cells to clump together. Modern “food grade” flax oil is now said to be “cold pressed,” a term that, according to most experts, is essentially meaningless, since some heat is always generated in grinding seeds, especially hard seeds like flax. Any heating or exposure to air causes this highly unstable plant oil to oxidize, which leads to high free radical production when it is consumed.

While the majority of research on flax oil appears to be positive, it is important to read between the lines of these studies. Some studies do not support the notion that flax oil is good for human consumption. Through his research at the University of Virginia Medical School in Charlottesville, Dr. Charles Myers found that flax seed oil increased the growth of prostate cancer cells by 300%, leading Dr. Myers and his associates to call flax oil the most powerful stimulant of prostate cancer cells they know of.

This, of course, is in direct contrast to studies that have shown flax oil to be a strong immune “stimulant” and thus helpful for cancer. Just because a substance stimulates the immune system does not mean it is good for it. A healthy substance is more likely to enhance or support immunity, not stimulate it. Viruses and other interfering organisms tend to stimulate immune response. The language used in scientific studies, therefore, does not always mean what it appears to mean. Another example of this shows up in studies where flax oil proved effective in lowering total body cholesterol. There are two types of cholesterol, a good one and a bad one, and flax oil was shown to have lowered both. This is not a good thing, but the way it has been presented makes it look like one.

Another study from Denmark, at the Clinical Chemistry Department of Aalborg Hospital, compared the effects of cod liver oil and flax seed oil on the EPA content of blood fats. After one week, cod liver oil had produced a tenfold increase in EPA content, while flax oil produced only insignificant increases.

The lignans found in flax oil should be another concern for many people; these steroid-like compounds may not be as healthy as we have been led to believe, especially at the levels found in both flax and soy. Sesamin (a lignan with strong antioxidant properties found in sesame seeds) and the lignans found in vegetables, nuts and grains are different from the toxic lignans found in flax. A little research into plant lignans reveals some interesting yet disturbing information. Apparently, there are about 450 known types of lignans. Just what are these lignans? They are often described as “potent naturally occurring substances with many toxic side effects.” One could argue that many plants we consume may have varying degrees of toxic side effects due to lignans. True, but flax is recognized as the highest source of plant lignans, containing up to one hundred times more than other sources, something the scientific literature promotes as a good thing. When these lignans are converted in the body to mammalian lignans, they have estrogen-like and anti-estrogenic effects. These compounds, touted and praised by the latest scientific literature, are similar to those found in soy. Contrary to what is being promoted, these estrogenic compounds are not beneficial to health.

Then there are the anti-nutrients linatine and cyanogenic glycosides found in flax. Linatine is a vitamin B6 antagonist. Flax also contains a cyanide-containing glycoside called linamarin, a substance that releases hydrogen cyanide under moist and acidic conditions (such as those inside your body). Normally, when the seed is processed with chemical solvents and high temperatures, the enzyme linase is destroyed and this is not a problem, but that is not the case with the cold- or expeller-pressed flax oil and raw flax seeds preferred by consumers. This same toxic substance can be found in lima beans and the cassava plant, both used as foods among indigenous peoples for thousands of years. However, both of these foods are processed by soaking and cooking before being consumed, which is not the case with flax seeds or flax oil.

It is currently suggested by nutritional experts that 1 to 2 tablespoons of flax oil be used daily, if it is used at all. Why the limitation? These suggestions do not apply to olive oil, coconut oil or animal fats, all of which have been used abundantly from the ancient world up to the present. Being in the field of natural health, I counsel people from all lifestyles. I have found that people attempting to follow natural diets, especially vegans and vegetarians, generally do not consume flax oil or flax seeds in the small amounts suggested; rather, as with their often excessive consumption of soy products, canola oil and soy oil, there is a tendency toward “more is better.” Could this become a problem of toxicity for many people, or can we rationalize it as acceptable because flax oil contains omega-3s, lignans and other isolated components that supposedly have remarkable health benefits?

Moreover, in spite of great effort by some courageous people to bring the detrimental effects of trans fats to public awareness, many people who consume flax oil are still unaware of this problem. Trans fats interfere with the conversion of EFAs in the body. So, are the many people who consume large amounts of trans fats wasting their time taking flax oil for the omega-3 fatty acids? If saturated fats, including animal fats, are important to the conversion of the EFAs found in flax oil, are the people who avoid these fats, or those on low-fat diets, wasting money and time on flax oil? Many vegans, raw foodists and vegetarians rely on flax oil as their primary source of omega-3s. These same people tend to avoid saturated fats (important factors in assisting the conversion of EFAs), while many of them consume trans fats (known to inhibit that conversion) in the form of hydrogenated or partially hydrogenated oils in margarine and other foods. Does this defeat the purpose of consuming flax oil altogether if either one of these factors is part of one’s chosen lifestyle? The science would seem to indicate it does.

Teas and medicines are one thing, but using flax seed and flax oil as a regular staple food does raise many questions. Perhaps what is needed is a manual listing all the dos and don’ts required to ensure the absorption and assimilation of flax and flax oil in order to reap the supposed benefits.

Apparently, at one point the FDA had concerns about the hydrogen cyanide found in flax seeds, but the agency later stated there was no concern. Now that is reassuring: an FDA sanction. Most health-conscious people are aware of the lack of consistency in this agency and of how the influence of corporations and private interest groups can be, and has been, used to sway its decisions about foods and ingredients.

I am not a scientist, but I am an observant witness, and I am fully aware of how science from one source can be used to refute science from another source, especially in the field of nutrition. A case in point is the ongoing soy debate. Both sides use scientific studies to back their causes, and for the person with little experience in how scientific studies can be bought by large corporations, it can be very confusing. In my 25 years of counseling and teaching in the field of natural health, I have found that regular use of non-traditional soy products and flax has done more harm than good for people seeking to improve their health. Unfortunately, with these examples anyway, the time-tested experiences of consumers far outweigh the scientific theories.

Traditional foods containing omega-3 fatty acids include many dark leafy greens, wild freshwater micro-algae, sardines, anchovies, walnuts, wild salmon, free-range eggs, pasture-raised beef and numerous other natural, unprocessed foods. Flax oil contains approximately 60% omega-3 fatty acids. Sardine oil and anchovy oil contain about half as much, but traditional peoples ate the whole fish, not just the oil. All of the other foods mentioned have low percentages of omega-3s, and it is the combination of omega-3s with the other ingredients in these nutrient-dense foods that once supplied us with balanced nutrition. That no natural food source contains the levels of omega-3s found in flax oil is good reason to question its validity. While omega-3 fatty acids are essential, they are also highly perishable and unstable, especially in flax oil. Western nutrition’s fostering of the “more is better” theory has done little to improve the overall health of people, and flax oil is another example of taking an isolated ingredient, promoting the quantity of it and making it look like more than it truly is.

Who eats flax seeds and flax oil? While the food industry is poised to position flax where soy now rules as king, “naturalists” represent the majority of people consuming both flax seeds and flax oil. Among natural food consumers, flax oil is the new kid on the block, but flax seed has been around for at least forty years, albeit among small alternative-lifestyle groups. In America, flax seed was, and still is, primarily used as a laxative by those suffering from chronic constipation. Most of the people who use flax grind the seeds and mix them with water or their favorite beverage for the sole purpose of relieving constipation, and for the most part, it works. However, in my experience with clients using flax seeds for this problem, I have found that 10 and even 20 years later this “laxative” has done little more than temporarily relieve the symptoms of constipation while increasing digestive distress and creating a dependency on flax seeds.

Having had the opportunity to observe the effects of flax oil on clients for some years now, I have found a few telltale signs that reveal the adverse effects of this product on human health. If you experience these symptoms or have any of these signs, you might want to consider an alternative to flax oil. Alternatives are fish oil, krill oil and, of course, a healthy, balanced diet of traditional foods that contain ample amounts of omega-3 fats.

Symptoms that often manifest after consuming flax oil for a week or longer:

a. The appearance of bruises. These may appear anywhere on the body, especially the arms and legs, with no history of impact to the area. They just appear.

b. Sudden appearance of brown spots on the face and hands that do not go away. These are often called liver spots. It has been suggested that this is a sign of free-radical damage from the highly perishable polyunsaturated flax oil.

c. Intermittent periods of nausea.

d. Loss of mobility and spastic movements.

e. Muscle cramping, especially in the calves of the legs.

f. Increased signs of aging, including wrinkles and flaccid muscle tone.

The great civilizations of antiquity reaped many benefits from the flax plant, a plant of such importance that it helped define cultures the world over through clothing and art (paint and ink). If flax was so “good” for them, why did they not also consider it a daily food?

What About Calcium?

September 13, 2004

Question: If I am eating a healthy diet, do I need to take extra calcium supplements or drink more milk to prevent potential bone problems as I get older?

Answer: Calcium, like vitamin C, B6 and other isolated nutrients, becomes an issue of nutritional concern when denatured foods are consumed in quantities that exceed those of wholesome natural foods, and when there is an extreme imbalance among the basic macronutrients of carbohydrates, fats and proteins in one’s diet. When we consume large quantities of denatured foods (foods that are highly refined, processed and laden with preservatives), we inevitably develop nutritional deficiencies. Nutritionally empty foods adversely affect metabolism, deplete enzyme and mineral reserves and contribute to numerous more specific deficiencies: nutritional voids that must be filled in order to regain nutritional balance.

The popular response to these nutritional voids is to saturate them with vitamin and mineral supplements. Many of the supplements available, however, are poor-quality sources of synthetic nutrients marketed in the form of vitamins, minerals, phytonutrients, antioxidants and so on; sometimes nutritionists also suggest real foods with added vitamins.

Taking calcium supplements without addressing the fundamental cause of the problem cannot solve the problem without creating more problems. The reason for this is that the more calcium supplements one takes, the more magnesium is needed to balance the calcium. And it doesn’t stop there. Additional nutritional factors are needed to balance the combination of calcium and magnesium. Fats play a major role in the absorption of calcium too, so a healthy source of fats needs to be considered, and fats are dependent on protein, carbohydrate and other nutrients to be adequately absorbed and assimilated. So begins a vicious cycle of left-brained, linear thinking that leads to confusion and, ultimately, an unresolved problem. Unfortunately, there isn’t a specific pill we can take to solve our nutritional problems. There are, however, numerous pills we can take to help us forget, ignore and avoid dealing with these problems, and they are promoted twenty-four seven on TV, in magazines and just about everywhere we look. Even with their often lengthy lists of side effects and contraindications, there is no mention of how the massive pharmaceutical industry contributes to the depletion of our nutritional reserves.

Responding to the “calcium need” from the Energetics perspective, we begin with the first governing law of Food Energetics: Quality and Quantity. Before attempting to fill the calcium-deficiency void, we first need to look at what is causing the void. Having established that denatured foods, including those with synthetic vitamins added to enhance the products, are devoid of vital minerals, and that these foods have also been shown to deplete minerals and other nutrients from the body, we then suggest better-quality foods overall. From there, we establish a sound dietary base, free from extreme points of view and grounded in common sense, history and tradition.

The next thing to consider is the quality of the varied sources of calcium available to us. Milk is the food most people recognize and accept as a source of calcium, as well as the food most often recommended by nutritionists for combating calcium deficiency. Yet in its modern pasteurized, homogenized and antibiotic-saturated form, milk is devoid of enzymes and other vital substances that support the bioavailability of the nutrition it has to offer. Few people have access to raw milk, and even among those who do, many are lactose intolerant. Another consideration is that cow’s milk contains approximately 82% casein and 18% whey, whereas human breast milk contains approximately 40% casein and 60% whey. Casein proteins coagulate to form solid clumps, while whey proteins tend to remain suspended in liquid. Casein is a bonding agent often used to make glue and other adhesives. Given these very different proportions of whey and casein in human and cow’s milk, one has to wonder how bonded we want to be to the source of the milk we choose to drink.

Cow’s milk is also acidic, with an average pH of 6.5. There are many other issues to consider concerning milk, but the most important is that historically cow’s milk was rarely used as a beverage, and when it was, it was consumed raw. For the most part, among agricultural peoples milk was naturally processed and cultured to make foods suitable for human consumption: butter, cheese, yogurt, buttermilk and so on. Even these are foods that should be consumed with appropriate accompaniments to ensure proper digestion and assimilation. For example, wine, bread, apples, pears, olives, capers and a variety of fermented vegetables traditionally accompanied cheese…and you know about bread and butter. Other animal milks, too, have been sources of calcium, including sheep and goat milks and the products made from them, all of which have different effects from cow’s milk and its products. When I say different, I am not implying better. Rather, each milk-producing animal has its own unique qualities. Meeting our nutritional needs is a complementary process, one that was worked out thousands of years ago through ancient wisdom and common sense.

While raw, naturally processed dairy products from grazing livestock have numerous redeeming qualities for many people, for many others, whose traditions did not include milk as a primary food, milk ranks low on a qualitative scale as a source of daily calcium. Other quality sources of calcium are nuts, seeds, dark leafy green vegetables and some varieties of marine algae (seaweed). The difference between these and other whole foods and a calcium supplement is the synergy of all the other healthy ingredients present in whole foods when they are properly prepared and consumed.

For those unable to consume milk due to lactose intolerance or allergies, the combination of healthy fats, proteins and carbohydrates found in seeds and nuts makes them high-quality traditional food choices for fulfilling calcium needs. Kale, collard greens, mustard greens, dandelion greens and other dark green vegetables are also great sources of calcium, and they contain other vital, supportive and complementary nutrients as well. The bioavailability of the calcium in these green plants is enhanced when they are braised, sautéed or stir-fried with coconut oil, palm oil, olive oil or butter. These sources of nutrition, with or without dairy products, when consumed on a regular basis, can help solve the calcium issue, along with several other nutritional deficiencies, for many people. For many traditional peoples these foods are the primary sources of calcium, and they have thrived on them for generations. For others, raw, naturally processed dairy products from cows, goats, sheep and other animals have been primary sources of calcium for many generations. It is certainly ideal if we can consume a combination of these traditional foods without adverse reactions; if we cannot, we should choose a variety of the foods we are comfortable with before considering supplements.

In summary, then: first, find the reason for the calcium deficiency. Being a mineral, calcium is easily depleted from our bodies through the excess consumption of pharmaceuticals, sugar, coffee, tea and denatured foods in general. Next, include more high-quality food sources that contain natural calcium. This is beneficial in several ways, because these foods have additional nutrients that complement the absorption and assimilation of calcium, and they are excellent and essential sources of nutrition in a healthy diet anyway. It is often said that calcium and magnesium work together in our cells. They are dependent on each other.

This is true, and it is also true that both of these minerals are dependent on fats, proteins, carbohydrates and other nutrients that work together synergistically to support a healthy human organism.

Understanding Food Energetics

September 13, 2004

The study of Food Energetics begins with a detailed look at what you are eating on a daily basis. Practical consideration of what you bring into your kitchen for nourishment is the first big step in discovering your basic individual dietary needs through food energetics. While calorie contents and vitamin percentages do add important meaning to your understanding of foods, there are many processed junk foods we humans put in our mouths and swallow that contain added nutrients in abundance, and yet these consumables have little to do with natural, nourishing foods. Trying to understand the essence of your food through nutritional analysis alone is like trying to understand the current US political agenda using mainstream media spin as your only source of information. It is important to look deeper, through alternative ideas and opinions, to find those hidden gems of knowledge and truth. This is where food energetics comes into your study of food and nutrition: as a reliable alternative source of information and knowledge that can help you make important decisions about how to nourish yourself. It is not a new way of understanding food, having been around for thousands of years, but it is of great importance if we are to continue surviving on this planet as a healthy species.

Your body is in a continuous process of change and adjustment and as a result needs to be nourished accordingly. This can be done most effectively through an open-minded inquiry into what you eat, digest, absorb, and assimilate.

A teenager has different dietary needs than a 3-year-old or even a 50-year-old. A construction worker and an office worker have different dietary needs too. However, there are some basic needs we all have in common. One of the most important is good nutrition through healthy foods. Not everyone wants to eat healthy, nutritious foods, but that does not mean they do not need them.
Granted, one’s individual nourishment needs extend beyond mere physical edibles; however, when your physical foods are truly nourishing to your body, you will have a solid foundation in health from which to choose how you are nourished in your other relations. Understanding food energetics will give you the opportunity to make sound and practical choices that can change the way you think about food, help answer many of your diet-related questions and open doors to new and interesting discoveries. Now let’s take a look in that kitchen by using a few guiding principles of food energetics.

Three Essential Principles

Any good theory comes with a set of principles that act as guides to assist you in exploring and understanding it. Here are three basic principles essential for understanding food energetics.

Food Quality

Health articles permeate magazines and TV with headlines like “A low-fat diet may prevent cancer and heart disease.” What articles like these rarely, if ever, tell you is how important quality is. In the article behind that headline, for example, the fact that natural, traditional fats and oils have never been implicated in heart disease and cancer in the first place will not even be mentioned. By omitting this important qualitative detail, such articles can easily lead the reader to believe that all fats are created equal and therefore problematic. Considering that the majority of studies on the detrimental effects of fats are based on research conducted on hydrogenated animal fats and vegetable oils, and not on the effects of traditional non-hydrogenated fats, it is no wonder people get confused. In fact, current research on traditionally used and naturally processed fats shows just the opposite. So, is it true that a low-fat diet can prevent cancer and heart disease? Perhaps it is more like, “A diet with little to no trans fats and processed polyunsaturated oils can prevent cancer and heart disease.” Better yet, “A healthy, varied diet with moderate amounts of good-quality fats and oils may help to prevent cancer and heart disease.”

An interesting example of misunderstood quality in fats can be found in the flax oil phenomenon. There have been numerous scientific studies touting flax oil as a panacea for a number of ills. These studies tout the extremely high levels of omega-3 fatty acids in flax oil, and simply because they are scientific studies, most people accept them as truth without stopping to question how those conclusions were reached. Much can be gleaned from how scientific studies are worded and from who funds them. For example, scientific reports state that flax oil “lowers total body cholesterol.” Does one really want to lower total body cholesterol, both the good and the bad? This is generally not a good idea. Other claims for flax oil include that it “has been shown to destroy cancer cells” and that it “stimulates the immune system.” How does it destroy cancer cells? Does it do so in a way similar to how chemotherapy kills cancer cells? It also stimulates the immune system. Is this a good thing? Viruses and many toxins have been shown to stimulate the immune system too. This is very different from a substance that supports the immune system. Many healthy foods and herbs have been shown to support immunity without stimulating it.

More can be said about this oil, but I will not get into it here due to limits of time and space. Here are a few health assessment questions you could ask anyone who consumes flax oil on a regular basis. Do you or have you:

1. experience intermittent bouts of nausea throughout the day?
2. experience cramping in the leg muscles, especially the calves and feet, during sleep?
3. have an increase in dark spots (often called liver spots) appearing on your face, hands, arms or back?
4. experience dizzy spells and lack of mental clarity with the slightest physical exertion?
5. experienced a loss of muscle tone (flaccid muscles) and an inability to build and tone muscle through regular workouts?
6. noticed an increase in the number of wrinkles on your face and other parts of your body that indicate rapid aging beyond your years?
7. experienced an increase in spastic movements and lack of coordination and mobility?

Granted, many of these symptoms can be attributed to other problems, but the consistent occurrence of 5 to 7 of these symptoms is not uncommon among those who consume flax oil. Now, apply the same questions to those who consume fish oils, which are also rich in omega-3s, omega-3s the body easily converts into usable nutrition. You will not get the same response. Why? It is a quality issue. Although people have not eaten capsules of fish oil for thousands of years, they have consumed fish high in omega-3 fatty acids along with numerous other natural sources of omega-3s. The same cannot be said of flax oil. I use the above example to demonstrate how easy it is to lose sight of quality when other prevailing nutritional factors are evident. Quality should always come first with our food, and a cash cow for the natural food industry, even one with science to back it up, is not necessarily there to support your health. Another example is the fledgling soy industry. Traditionally made miso, natto, tempeh and shoyu are qualitatively very different from the soy junk foods flooding the natural marketplace.

The quality of our food is determined by weighing the natural against the artificial. By studying naturally grown and naturally raised foods, we become aware of the true nature of the food in question. Foods grown with chemicals and preservatives, or those with added hormones or genetic alterations, represent the lowest-quality foods available for human consumption.

Anyone with a desire to understand the nature of food needs to make quality the first rule of thumb. Unfortunately, this is not often the case with conventional nutritional studies, where the measuring of nutrients often takes precedence over quality. This form of food analysis, while useful, often leads to categorizing foods into groups in ways that can cause confusion. A nutritional analysis of a commercial breakfast cereal with added synthetic vitamins may look the same as, or even better than, the nutritional profile of an organic version of the actual whole grain that forms the basis of that cereal. Both are carbohydrates, yet they are qualitatively very different from each other.

The same is true for naturally raised, grass-fed livestock and factory-farmed livestock. The former are raised in a natural environment eating their natural grass diet, while the latter are raised in confinement, often consuming foods unnatural to the species and further enhanced with hormones and chemicals. While both animals may look alike, when examined under a microscope for nutritional data they reveal differences in fat and other nutrient levels that make them qualitatively very different.

The dietary choices we make based on nutritional analysis will yield far greater personal health benefits when quality is considered first. While a cursory comparison of superior-quality, naturally grown foods with their inferior counterparts may in some cases show similar nutritional profiles, scientific studies have shown that quality foods generally have higher levels of nutrients in addition to little or no harmful chemicals and preservatives. The harmful effects on our health of inferior foods laced with toxic chemicals and artificial ingredients are obvious, and it is this obviousness that often leads well-intentioned proponents of natural foods to inaccurately demonize particular foods; studying those same foods grown naturally leads to a greater comprehension of what they are as sources of nourishment.

The study of food quality leads us to the fundamental truth that the foundation of any truly healthy diet must consist of naturally grown and naturally raised foods.

History and Origin

The history and origin of a particular food is another essential principle in the quest for understanding food energetics. How long has the food been consumed by humans? Where did the food in question originate and how?

Sweet potatoes have been grown and consumed by Asian and South American peoples for thousands of years, in spite of the fact that these cultures are thousands of miles from each other. The same is true for peanuts. Chickens have provided nourishment for Asian and Middle Eastern peoples for 7,000 to 10,000 years. Whole grains have been essential foods for 20,000 years or more. In fact, the very same unprocessed foods we consume today have been the mainstay of the human diet for at least 10,000 years, with the primary changes over that time being in quality, and most of those changes have occurred in only the last two hundred years. Aside from offshoots produced through grafting and the crossing of plant species to create new vegetables, there have been few additions to human foods in over 10,000 years.

Why is that? Could it be because there has been no need to alter these highly nourishing ancestral foods? In the coming years we will likely see more foods introduced as a result of genetic manipulation, but for the most part we are still dependent on our ancestral foods as nourishment for the masses.

Now, one could argue that just because a food has been consumed for thousands of years does not mean it is good for you. This statement has little relevance when you consider that all diets, modern and ancestral, are by design. The foods in any diet are chosen for a particular agenda, whether health, spirituality, emotional stability or strength, with some diets being multifaceted. Ancestral foods have a long track record of nourishing the human species, and all ancestral foods, without exception, can be shown to be nourishing through nutritional analysis as well.

It really has nothing to do with good or bad for us until we bring our personal designer diet into the picture. It is then that we begin to define a food as good or bad and begin to remove it from, or add it to, our approved foods list. It is one thing to say you do not want to eat a chicken because you do not believe in eating animals, but attempting to rationalize not eating chicken because it is a bad or unhealthy food is not only inaccurate, it is absurd. Alternatively, you might try to rationalize not eating chicken because you think you can get the same protein from other sources. This too is inaccurate. Sure, you can get protein from different sources, but the chicken has its own unique profile of proteins and other nutrients that makes it a chicken and nothing else. No other food has the same nutritional profile that a chicken has. The same goes for a cow or any source of plant protein. This in no way means one is better than the other as a food, just different. Comparative nutrition profiling is not only inaccurate; it can be misleading when trying to understand food energetics. The very purpose of studying food energetics is to get to know the uniqueness of each food, not to debase that uniqueness by comparing some of its isolated nutrients to something else equally unique in its own way simply because it has some of the same nutrients.

Then there is the question of location: where a food originated and how that fits into your diet. Historically, foods grown in one part of the world have traveled to the other side of the world through trade and commerce. This has been going on longer than most historians would like to believe, and there is plenty of evidence to support it. Does eating from your environment mean not eating spices from Thailand if you live in North America? How about someone living in the mountains, thousands of miles from the sea: should they not eat fish or sea vegetables since those foods are not part of their environment? Only you can put limitations on the size of your food environment. You can choose to limit your diet to foods grown within 1,000 miles in any direction, or you could choose to include foods from many thousands of miles away. Which choice do you think offers the better options for a healthy, balanced diet? A good axiom to uphold when choosing foods for a balanced whole foods diet is: support locally, eat globally.

As humans, we have the capacity to eat anything, as evidenced by the tremendous quantities of junk food we consume. Is it reasonable to assume that someone living in a non-tropical environment who consumes coconut oil from the tropics is going to become mentally unbalanced or develop health problems from eating a food that does not come from his environment? Could one actually become dislocated in time and space due to this food relationship? And if so, would that be so bad? Coconut oil has a four-year shelf life at room temperature, is one of the few heat-stable plant oils for cooking and, like the coconut from which it is derived, is loaded with all kinds of health-supporting nutrition. Based on nutritional propaganda about saturated fats, one would think that coconut oil, with up to 87% saturated fat, would be detrimental to health. Yet just the opposite is true. Due to its high content of lauric acid, a substance similar to the nourishing components in mother’s milk, the short- and medium-chain fatty acids in coconut oil are easily assimilated by the body, with no indication of raising cholesterol or contributing to weight gain. It is also very helpful in regulating metabolism, making it an especially helpful food for those suffering from binge eating and other eating disorders. The other products derived from coconut also contribute to good health.

Rich, creamy and metabolically satisfying, coconut milk and coconut cream are great foods for those who have trouble consuming dairy products due to lactose intolerance. Essentially, then, coconuts and the products derived from them are healthy foods that just about anyone, anywhere, can benefit from. The old saying “the world is your oyster” can be taken literally when it comes to food. The choices are limitless, but understanding the varieties of foods you choose to consume, why you choose them and your individual limitations is essential.

How about spices? Tropically grown cayenne peppers, chili-type peppers and peppercorns have played important roles in traditional diets for millennia, having served indigenous peoples in numerous ways. Not only do they add wonderful and exotic flavors to daily cuisine, they also have medicinal qualities that have been acknowledged by modern science. These seasonings have antibacterial, antiviral and antiparasitic qualities. Additionally, they increase blood circulation and can help regulate cholesterol levels. There is no reason whatsoever that these foods should not be part of a healthy diet in northern Scandinavia and other northern countries, since people there, like everyone else in the world, are not exempt from the problems these foods can help to resolve. One way of comprehending the importance of a food’s role in history is to eat it. It is when you do this that the historical position of that food in a traditional diet is fully realized, because it will become you.

There are many fears surrounding foods today, but most of these food fears can easily be reduced by first eliminating non-food “foods” from one’s diet and replacing them with real traditional foods. Once this is accomplished, that once deep-seated fear of food transforms into a sense of respect for all that came before us. This is not to say you should eat all traditional foods from every part of the world, but knowing these foods as the original sources of nourishment can open doors to making important choices in your diet that could literally change your life for the better while adding new sources of nutrition.

Food Character Observations

When you observe other people, you become aware of numerous characteristics that attract your attention. Dark hair, light hair, lean body, overweight, muscular build, big smile, sadness, well-dressed and conservative, casual elegance, unkempt, charismatic, fatigued, high energy…these and many other characteristics flash through your mind during your daily encounters with others. Some of them are given extra thought and contemplation, while others are simply noticed.

We humans have so many physical and emotional characteristics that it sometimes becomes difficult to figure each other out, but we all know that some basic traits are shared by us all, and these are what make us human. Whether it is a long-term relationship, new friends, allies in the workplace or any acquaintance, really, we are constantly discovering new characteristics in those around us.

The foods you eat also have their own unique characteristics, and observing them can give you insight into how a particular food nourishes you through its correlations to your body and mind. By observing how a plant grows and develops, you learn about its needs, what it requires to become a food that you will consume. With animals, you can observe their growth and development too, but you can also observe their behavior and, in the case of factory-farmed animals, the behavioral changes caused by the disruption of their natural lifestyles.

The obsessive-compulsive behavior, often accompanied by osteoporosis, of caged hens is not a characteristic of free-range chickens. The grass-fed cow maintains a healthy weight and disposition, while the cow raised in confinement carries more fat and tends to suffer from depression and other health problems. It is not far-fetched to surmise a psychological connection between factory-farmed chicken and the increase in OCD among humans consuming excessive quantities of these animals, just as it is not far-fetched to suspect a link between osteoporosis in humans and the same animal. Food psychology is not a new phenomenon. Ancient peoples understood it, and it became the foundation of traditional healing modalities throughout the world. Traditional Chinese Medicine and Ayurvedic healing are just two examples that incorporate this natural science.

When observing a leafy green vegetable, you can see how it grows upward and thrives on sunshine. Exposed to the elements, it withstands torrential rains and continues to grow and expand upward and outward as it inhales carbon dioxide and exhales oxygen. This correspondence is reversed in your respiratory system, where your lungs inhale oxygen and exhale carbon dioxide. Green plants, then, are the respiratory system of the earth and have a direct correlation to your corresponding bodily function.

Root vegetables (carrots, burdock, parsnip and so on) are foods that prefer the dark recesses of the earth, private foods hidden away from the bright sunshine. These foods are highly efficient at absorbing and assimilating water and nutrients from the earth while they anchor and stabilize the whole plant.
These simple observations speak volumes about the energetic properties of these foods.

The firm-fleshed winter squash is planted in summer and harvested in fall. As it develops, a long tubular stem nourishes the squash by supplying it with water, inorganic materials and other nutrients it needs to develop into a dry, warming, sweet-fleshed, nourishing food. The process of growth and development is slow and consistent.

Planted and harvested in the summer, the juicy, sweet watermelon is likewise nourished by a long tubular stem that pumps copious amounts of water and nourishment to the melon. This makes its development process and growing season shorter than the squash’s and results in delicious, moist, cooling flesh. Both the squash and the melon are heavy, sweet and firm on the exterior, but their water content and the speed at which they develop differ greatly. The way the squash and melon handle water from their environment has a direct correlation to our kidneys and bladder, the two organs responsible for water balance in the human body. The obvious effect of the melon is an increase in urination, while the dry flesh of the squash has the opposite effect; but that is not all. Each food can have several energetic effects on the body.

Modern nutritional science has recently discovered that some food components, such as phytonutrients and antioxidants, travel along specific pathways within the complex network of the human body. The ability of foods to traverse the physiological network of the human body has long been part of the study of food energetics. Just as modern research has demonstrated nutritional pathways, ancient food energetics also goes well beyond the simple ingestion of food into the digestive tract and the excretion of waste from the large intestine.

Thousands of years ago, Traditional Chinese Medicine demonstrated how the flavors of foods choose specific pathways to organs and systems of the human body. For example, the sweet flavor traverses the spleen and pancreas meridian pathways. These pathways (commonly known as meridians) play important roles in the natural healing modalities of acupuncture, herbalism and massage therapy. Each of the five flavors follows its own specific pathways to a pair of organs in the body.

Using our example of the squash and melon, you can learn about another energetic property through flavor pathways. Both are sweet-tasting foods, so they will naturally travel the pathways to the spleen and stomach, carrying with them their unique energetics, or characteristics. The resulting effects on these organs will tend to be as follows: the juicy, sweet, soft and watery melon will tend to have a relaxing, cooling and dampening effect on the spleen and stomach, whereas the winter squash will tend to have a tonifying, warming and drying effect on them. One is not better than the other; they are simply different, just as every other food is. Each has its own unique energetic characteristics.

Other methods of character observation include noting what happens to a food when it is combined with other foods or prepared in various ways. Adding fire to foods through cooking contributes thermogenic (warming) properties, depending on the food’s density and how it is cooked. Pickling foods, which results in fermentation and the enhancement of enzymes, can change the energetics of foods by opening pathways that would not be traversed had they been prepared through steaming, boiling or sautéing. Food textures (hard, crunchy, soft, chewy and so on) influence the energetics of foods in their own ways as well.

First ask yourself, and then ask your friends, what their favorite green vegetable is, their favorite root vegetable, grain, animal product (dairy, meat and so on), fruit… Then apply our third principle, Character Observations, and learn about those foods. Discover why you like them so much, why they have become such an intimate part of your life. Learn how they have nourished you in helping you to heal, or even how they could be preventing you from healing and being nutritionally satisfied. In the process of discovery, you will learn many things about yourself through those foods, simply because the foods you eat will become you and, in subtle ways, you will become them.

The following chart is a representation of dietary proportions roughly based on the well-known mathematical model called “The Golden Rectangle.”

[Image: chart of dietary proportions based on the Golden Rectangle]

This fascinating geometric model has been called by many other names in different parts of the world and different historical epochs. The spiral has often been termed the “universal master form,” and has long been revered as a sacred symbol of life, death, and transformation. Found abundantly throughout nature in both the plant and animal realms, manifestations of the spiral with its logarithmic progressions have been represented in the arts and sciences of every culture, from the most primitive to the most advanced. It is in the logarithmic spiral that we find a consistent thread of knowledge linking the beliefs and lifestyles of the great ancient civilizations through art, architecture, astronomy, engineering—and even agriculture, pastoralism, and food production.
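For readers curious about the mathematics behind this model, here is a minimal sketch of the standard definitions of the golden ratio and the golden (logarithmic) spiral. These are general mathematical facts, not figures taken from the chart above.

% Dividing a line so that the whole relates to the larger part as the larger part relates to the smaller defines the golden ratio:
\[
\frac{a+b}{a} \;=\; \frac{a}{b} \;=\; \varphi
\quad\Longrightarrow\quad
\varphi^{2} = \varphi + 1
\quad\Longrightarrow\quad
\varphi = \frac{1+\sqrt{5}}{2} \approx 1.618
\]
% Removing a square from a golden rectangle leaves a smaller golden rectangle, each step scaled by 1/\varphi (about 0.618);
% joining the nested squares traces the logarithmic ("golden") spiral,
\[
r(\theta) = a\,\varphi^{\,2\theta/\pi},
\]
% whose radius grows by a factor of \varphi with every quarter turn.

In other words, each successive subdivision is roughly 61.8 percent the size of the one before it, which is presumably the sense in which the chart’s proportions taper from one section to the next.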

Whether in India, China, the Middle East, the Far East, Africa, North or South America, the study of traditional foods reveals a dietary pattern based on a remarkably consistent wisdom of sensibility, practicality, and proportion. On the surface, these traditions are characterized by obvious cultural diversity: thus, one part of the world may include cereals of a particular species, while elsewhere another species takes precedence. The same holds true for vegetables, animal products, beans, seeds and nuts, dairy products, etc. Yet there is an underlying consistency—one is tempted to say uniformity—in the basic proportions of foods and food groups each culture has used to feed its people.

There are, of course, broad differences based on climate and terrain. Coastal peoples naturally consumed more fish and seafood than inland peoples, who consumed more fowl and mammal products. Yet as fundamentally agricultural peoples, both groups consumed animal products in a roughly equivalent proportion to all other foods. While ruling classes often exercised the exclusive rights of the wealthy to adjust these proportions for their personal use, the general proportions were maintained among traditional peoples, just as they are today. Some non-agricultural peoples represent exceptions to these principles due to environmental factors or simply because they do not practice agriculture. However, even among many hunter-gatherers these proportions, with larger quantities of plant foods and smaller portions of animal products, are the norm.

Golden Ratio, Golden Age

It is my experience that a diet of traditionally grown foods, eaten in accordance with the principles of the golden spiral, holds the keys to health where so many other approaches fail. The culture bearers of antiquity were able to create a golden age, a state that is possible only when all aspects of life are in harmony. I believe we can achieve optimal health, a clean environment, and new levels of cultural expression by consciously implementing these principles once again.

Careful investigation of the spirallic nature of the traditional dietary pattern leads to a startlingly profound truth at its center: Light is the source of inspiration. This is simply a poetic way of stating that the source of all is the powerful force of creation itself—what some refer to as “God” or by many other names.

This omnipresent, omniscient force charges and generates the process and flow of energy through each stage of this spiral, supplying the raw materials for human sustenance and nourishment. These materials consist of macronutrients, vitamins, phytonutrients, carotenoids and all the other known and unknown ingredients inherent in our daily foods.

Beyond these ingredients, every food also carries specific energetic properties exclusive to its makeup and origin. These “energetics” have the ability to alter our health in profound ways.

THE SPIRAL OF NOURISHMENT

Section 1: Air

When we are born, the first breath, a simple inhalation of air, is the beginning of an incredible journey in self-discovery that is individually unique.

Throughout our lives, air will serve as our most abundant resource for nourishment. Oxygen from the air will spark the fires of metabolism, helping to regulate all micro- and macro-cellular functions in the physical body. Although we experience variations in our respiratory quotients, the air we breathe will supply the greatest proportion of nourishment as energy to support life.

The constant exchange of oxygen with carbon dioxide through the respiratory system plays an integral role in the body’s ability to process all other forms of nutrients. Air is the one form of nourishment that is consumed while you are both asleep and awake. Other forms of nutrition listed in the chart have a proportionally equal influence on oxygen’s effectiveness—right down to the cellular level.

Section 2: Water

Water comprises more of our daily diet than we realize. Fruits and vegetables are mostly water. Grains, beans, soups, and boiled or steamed foods are prepared with water. About 75 percent of the human body is water. If we were to weigh all that we consume at the end of a day, water would weigh in as the second largest quantity, after air.

We require large amounts of water daily for cooling the body and cleansing the blood, cells, and organs of toxins and wastes. Water is also essential in maintaining and regulating healthy bowel function and peristalsis.

While pure water is the ideal beverage and the one that hydrates our body cells most efficiently, our ancestors also consumed other liquids, including herbal teas, wine, beer, and water-based herbal remedies.

Section 3: Carbohydrates

The ideal carbohydrates are those consumed by traditional peoples: whole grains, whole grain products (breads, pastas, etc.), pseudo-cereals (quinoa, amaranth, teff), and starchy vegetables (roots and tubers: sweet potato, potato, yam, yucca, squashes, and others). These foods, partly because of their versatility, should make up the largest portion of the diet.

This section also includes supplementary carbohydrates (all fruits and sweeteners). Some typical sweeteners are grain malts, honey, maple syrup, sugar cane, etc. Supplementary carbohydrates were consumed in smaller quantities in traditional diets.

Among the carbohydrates, revered above all other foods, appear the “sacred gifts of the gods”: the grains and pseudo-cereals. Ancient peoples believed these staples held the genetic memory of human origin and the spiritual essence of man and woman.

Section 4: Vegetables

Vegetables include all edible land and water plants. Roots, seeds, leaves, stalks, buds, and marine- and fresh-water algae are but some of the plant forms consumed around the world. Although many vegetables can be considered additional sources of carbohydrates, I have placed them in a separate section because of their dietary importance in the lives of our ancestors.

Vegetables contain a wide spectrum of nutrients and act as neutralizing agents when eaten in proper combination with other foods. For most people, vegetables are sorely lacking in the diet; the exceptions are those who follow certain ethnic traditions or dietary programs specifically based on ample quantities of a variety of fresh vegetables.

Section 5: Protein

Progressing from greatest quantity to smallest, protein sources follow wild and domesticated plants as the next category in the spiral. This category embraces a broad range of traditional foods; eggs, wild game, fowl, waterfowl, cattle, lamb, goat, pig, organ meats, dairy products, shellfish, fish, beans, seeds, and nuts are some of the many types of protein consumed historically throughout the world.

Section 6: Fats and Oils

Fats and oils have a long history in dietary traditions, consumed either as part of other foods or as a base used to prepare particular foods. Olives, avocados, and nuts are whole-plant sources of fat; salmon, eggs, and beef are examples of animal sources. Some traditional, stand-alone fats include animal-derived butter, ghee, and lard, and the plant oils from olives, sesame seeds, hazelnuts, palm kernels, and coconuts.

Section 7: Essential and Supportive Supplements

This section of the dietary spiral includes herbs, spices, salt, minerals, and natural medicinal foods such as bee pollen, specialized fungi, and microalgae. It can also include high-quality sources of non-synthetic whole-food supplements.

Supplements have several functions. They can enhance digestion, aid assimilation, improve flavor, raise or lower blood pressure, increase kidney or liver efficiency, and strengthen the immune system. Peppercorns and certain other spices possess anti-viral, anti-bacterial, and anti-fungal properties.

Additional Considerations

The area of each section represents roughly the proportion that type of food amounted to in our agricultural ancestors’ daily diets. These proportions have held true for many generations, far back into antiquity.

Each section of the chart is proportionally dependent on every other section for proper metabolic balance and health maintenance.

While the chart divides different foods into sections or categories, bear in mind that in the context of nature and tradition, this represents a single continuum: you will find most foods and food groups overlapping with the preceding and/or following categories. For example, “beans and nuts” embraces nutritional combinations of fats, proteins, and carbohydrates in varying amounts.

The sections are also designed for easy recognition of the food sources most commonly accepted as fitting in that section. For example, beans (legumes) are only about 25 percent protein, yet they are most commonly thought of as a protein source and have therefore been placed in the protein category. As another example, many cheeses are high in both protein and fat and could logically be ascribed to either category. Yet, like beans, cheese is more commonly thought of as a protein food, so it is grouped in that section. Leafy green plants, which also provide some protein, are grouped in their own section, as explained above.
Most plant proteins have incomplete amino acid profiles, while animal proteins contain all the essential amino acids necessary for the body’s growth and cell repair.

Plant oils have a different effect in the body than animal fats, and each type of oil or fat has its own unique effect as well. In fact, the body responds uniquely and distinctly to each different food.

The Energetics of Food

September 13, 2004

In this age of chemical additives, preservatives, and the onset of GMOs (genetically modified organisms), we are faced with serious concerns about the quality of our food and its effect on our health.

What worsens these concerns is the inordinate difficulty of sorting through all the “newest information” to make any reasonable sense out of an increasingly bewildering picture. Multinational corporations and private interest groups “buy” food science to promote their special agendas, each tailored to support profits in the marketplace. As a result, we are routinely bombarded with an at times bizarre assortment of food theories and fad diets, each solemnly pronounced to be beneficial to our health, a “breakthrough” that will increase our chances at longevity and lean, fat-free bodies.

Indeed, many of us experience such an information overload that we are no longer able to ask the right questions and think for ourselves. To help remedy this situation, I offer food energetics as a simple way to understand and choose food.

QUALITY IS HEALTH

At first hearing, the term “food energetics” may sound like yet another diet theory; quite the opposite is true. Food energetics is an effort to distance us from fad and theory, to start thinking for ourselves and simplify our daily eating habits in a way that can be exceptionally rewarding.

What do you think of when you talk about health and hear the word “energy”? Perhaps words like vitality, stamina, endurance, and mental stability come to mind, as well as other terms that all have to do with feeling good. Thinking of food as being “energetic” suggests the idea of vibrant, healthy, enduring foods, foods that are themselves well nourished.

Grade-school science classes taught us that plants need sunlight, air, water, and soil to grow. True enough, but equally important is the question of quality. Superior plants make superior food; the importance of quality cannot be overemphasized. For example, the healthiest soil contains a balance of many minerals and microorganisms. The highest quality (most energetic) food sources are those grown via the most natural methods. “Wild” or “wild-crafted,” “biodynamic,” and “organic” are terms associated with foods grown with natural methods. “Commercial,” “hydroponic,” and “genetically engineered” are descriptions of those methods that are both more artificial and by far the predominant ones on the market today. Unfortunately, such artificial methods cause their produce (whether plant or animal) to progressively lose its integrity (identity), energy (energetic health), and nutritional value.

Quality nourishment is the first principle of food energetics.

ESSENTIAL CHARACTER

The second principle is “essential character.”
Each food has qualities that make it what it is. For example, a cow is a cow—by its character, it is not a goat or sheep, and certainly not a chicken. The essential character of a food is what gives it the identity we recognize as an orange, duck, fish, or cow. There are four basic patterns that make up this principle; each food has a predominance of one of these patterns. These four patterns are:

• down and in;
• down and out;
• up and out;
• up and in.

These four patterns help to better understand the nature of a particular food. A carrot, for example, grows underground, and not in a rounded path (like a turnip) but fairly straight down, to a tip. Considering the four patterns, we see that “down and in” best reflects the growth pattern of the carrot.

Being a root vegetable, the carrot absorbs and assimilates vital elements from the soil for the ultimate benefit of the entire plant. This is also true for other root vegetables: they are the portions of the plant that absorb and assimilate elements from the soil and firmly anchor the entire plant into that soil. Yet the turnip, though it is also a root vegetable, doesn’t exhibit the same characteristic pattern as does the carrot. Its growth rather follows the pattern “down and out,” revealed in the outward, bulging direction that gives it its round shape.

What about leafy green plants, such as collard greens, kale, and lettuce? Which of the four patterns applies to them? Leafy greens grow upward with leaves that spread outward: we would say they are “up and out.” During photosynthesis, green leaves take in carbon dioxide and give off oxygen. Green plants are also high in chlorophyll, which helps supply our blood with oxygen.

Let’s take the principle of essential character a step further and see if it can be applied to the systems of the human body.

Do you recall the description above of roots as absorbing and assimilating nutrients for the entire plant? Is there an organ or organ system in the human body that performs the same function? Indeed there is: the gastrointestinal tract, especially the small intestine, where most of our food’s nutrients are absorbed into the bloodstream. Therefore, it would be a reasonable step of energetic logic to propose that root vegetables and the GI tract share a similar or related essential character.

Let’s apply the same transposing logic to the essential character of leafy greens. Their function is the exchange of gases: taking in carbon dioxide and giving off oxygen. Our body functions in a similar but opposite way, taking in oxygen from the environment and giving off carbon dioxide, a process associated with the respiratory system. Again, we have a function in the plant world that is mirrored and paralleled by a function of the human body. Here again, the metaphoric logic of food energetics suggests that a food with a particular function best nourishes and supports the human organs that perform the corresponding role in the body. Among traditional peoples throughout the world, this correspondence has historically been known as the “Doctrine of Signatures.”

TEMPERAMENT

A third crucial principle of food energetics is “temperament.”
The temperament of a food can be compared to the various psychophysical states within people. We describe certain individuals as being “hot-blooded,” others as “cool” or even “cold-blooded.” We instinctively refer to others as having warm personalities but, perhaps, a dry sense of humor. As in the discussion of essential character, these are more than merely fanciful observations: there is something intrinsic to these different people that is quite real and goes to the core of their nature.

Precisely the same is true of foods.
The four temperaments of food are hot, warm, cool, and cold. Each temperament always occurs in one of two variations, either dry or damp. These permutations produce eight possible temperaments:

• hot and dry;
• hot and damp;
• warm and dry;
• warm and damp;
• cool and dry;
• cool and damp;
• cold and dry;
• cold and damp.

Let’s look at how these temperaments associate with a human personality.
If a person is angry to the point of physical aggression, we can easily surmise that his temperament is hot. If that same person perspires profusely, we can describe his temperament as hot and damp.

Of course, we all have the capacity to control our extreme temperaments, and rarely does anyone exhibit the same temperament all the time. For that matter, there is not a food in nature that exhibits only one, solitary, unchanging temperament. Temperament, whether of a person or a food, can be thought of as a tendency, and one that is subject to change.

The temperament also reveals the potential effects a particular food might have on the body and mind of the person consuming it. For example, fats and proteins help generate warmth in our bodies through a process called thermogenesis. Vitamins, enzymes, minerals, and carbohydrates tend to have cooling effects on the body.

When classifying foods according to temperament you will discover several variables. For example, consider the difference in temperaments between a cow (beef, red meat) and a chicken. The large, mammalian cow yields meat that is high in protein, composed largely of slow-twitch fibers, rich in myoglobin (the red pigment of muscle), and marbled with saturated fat, giving this food a hot and damp temperament. Chicken breast meat, by contrast, is white, composed mostly of fast-twitch fibers, is less fatty, and contains far less myoglobin than red meat. Chicken tends toward a warm and dry temperament. Each will affect the body differently because of these different temperaments.

Every food has its own unique temperament.

PREPARATION

It is also important to realize that any temperament can be altered, sometimes significantly, through food preparation methods, i.e., cooking, fermentation, drying, etc. In the study of food energetics, method of preparation is the final consideration (after quality, essential character, and temperament) in determining how a food will nourish us.

As a simple example, consider a raw carrot, with its character pattern of down and in and a temperament of cool and dry. When sautéed with oil and seasoned with salt, it maintains its essential carrot character of down and in, but its temperament transforms from cool and dry to warm and damp because of the added oil, salt, and heat.

Cooked foods are generally associated with warm temperaments, while raw foods tend to have cool temperaments. A raw green salad, low in fat and protein but high in water, vitamins, and enzymes, will tend to have a cool and damp temperament that will impart a cooling effect on the body.
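
To make the bookkeeping concrete, here is a brief, purely illustrative sketch in Python. It is not the author’s system or a formal algorithm; the names (THERMAL, MOISTURE, prepare) and the two shift rules (heat moves a food one step warmer, oil makes it damp) are simplified assumptions chosen only to mirror the carrot example above.

from itertools import product

THERMAL = ["hot", "warm", "cool", "cold"]   # ordered from warmest to coldest
MOISTURE = ["dry", "damp"]

# The eight possible temperaments listed earlier, e.g. "hot and dry".
ALL_TEMPERAMENTS = [f"{t} and {m}" for t, m in product(THERMAL, MOISTURE)]

def prepare(temperament, heat=False, oil=False):
    """Return a shifted (thermal, moisture) pair under two assumed rules:
    applying heat moves the food one step warmer; adding oil makes it damp."""
    thermal, moisture = temperament
    if heat:
        index = THERMAL.index(thermal)
        thermal = THERMAL[max(0, index - 1)]   # one step toward "hot"
    if oil:
        moisture = "damp"
    return (thermal, moisture)

# The carrot example from the text: cool and dry when raw,
# warm and damp once sautéed in oil.
print(prepare(("cool", "dry"), heat=True, oil=True))   # ('warm', 'damp')

Enumerating ALL_TEMPERAMENTS reproduces the eight combinations listed earlier; the prepare rules are, again, only a toy stand-in for the more nuanced judgment the text describes.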

To consider an example of how this understanding might be applied to health: Imagine a thin, frail person, living in a northern climate and suffering from a chronic condition of cold hands and feet. It doesn’t take a great deal of common sense to realize that he needs to increase warm foods, perhaps those richer in protein and fat. He would be wise to reduce cool foods, such as raw vegetables, fruits, and juices in order to correct his condition through diet.

Now, this is not a black and white process, nor does it serve well as a prescriptive straitjacket. The understanding and application of food energetics has to do with personal choice — with the individual condition and a commonsense approach to applying these principles in real-life situations. As we learn more about food, we learn more about ourselves, our own preferences, natures, and tolerance levels. Dietary extremes tend inevitably to provoke extremes of equivalent force in the opposite direction!

THE SCIENCE OF NATURE

Another way of describing the principles of food energetics is to say that this is one approach to studying nature’s science.

Traditional peoples throughout the world were, and many still are, intimately connected to nature, and it is through this relationship that they learned over time to apply this “science of nature” to all aspects of their lives. As their knowledge increased, their application of natural laws became the traditional wisdom from which today’s natural healing arts draw their scientific and philosophical foundations. Two of the most widely known of these healing arts are traditional Chinese medicine and the Ayurvedic medicine of India; however, there are many others, equally valid and powerful, that are still practiced today by traditional peoples around the world.

By combining ancient wisdom with modern nutritional research and our own creative understanding, we can arrive at a genuine science of nature that will allow each of us to grasp the true meaning of the French gastronome Brillat-Savarin’s famous phrase, “You are what you eat.”1

Energetic principles are easy and fun to apply. I strongly encourage you to consciously choose a wide variety of foods to experiment with, because food variety contributes to vibrant health and an open mind. Explore and enjoy the foods you eat—because ultimately they will become you!

1. Widely described as “history’s greatest gastronome,” Jean Anthelme Brillat-Savarin spent time in the United States following the French Revolution, supporting himself by giving language lessons and playing violin in a New York theater orchestra as he introduced new and exotic dishes to his American friends. His most admired work is La Physiologie du Goût (The Physiology of Taste, 1825). His actual expression was, “Tell me what you eat and I will tell you what you are.”

Traditional Ancestral Diets

September 13, 2004

Two Ancestral Lines

When considering traditional ancestral diets as a model from which we can draw to improve our own food choices for better health, we must understand that these diets vary considerably according to a host of factors, including soil conditions, cultural habits, changing weather, availability of resources, and more. Despite these smaller variations, we can place traditional diets broadly into one of two categories depending on whether the people who practiced them were hunter-gatherers or agriculturalists.

Conventional historians assert that the earliest traditional human diets were those of the hunter-gatherers; however, this established theory is now being questioned by a host of alternative historians. Explorers, anthropologists, and other scientists have offered compelling evidence of long-lost agriculturalist civilizations that had reached levels of development equal to or more advanced than our own.

The theory of advanced agricultural civilizations in prehistory parallels challenges to the mainstream theory of cultural evolution, which proclaims a slow evolutionary process from primates to primitive humans who used stone tools and fire, culminating in the first civilizations between 5,500 BC and 3,500 BC. While this “cultural evolution” orthodoxy has its supporting evidence, the theory is limited by its hypothesis that civilization began about this time and that all prehistoric humans lived as hunter-gatherers until about 10,000 years ago, when they began to discover agriculture. The alternative point of view suggests that both agricultural and non-agricultural peoples may have coexisted for thousands or even tens of thousands of years—long before the accepted (and more conservative) estimated timeline for the advent of agriculture.

As one explores publications with conventional and alternative perspectives on ancient history, a profound realization begins to emerge: We modern humans have been around for a long time and have experienced numerous cycles of catastrophic destruction. Through sheer tenacity and the will to survive, we have repeatedly emerged from the ashes of destruction to rebuild civilization.

Today, many historians blame our agricultural ancestors for the downfall of several civilizations, when the true causes were more likely drought, floods, fire, or any number of other natural phenomena. Civilized Homo sapiens have lived through ice age conditions and numerous other periods of setbacks, surviving it all along with primitive humans and a variety of other primates.

Many of the primitive humans of Paleolithic times did not participate in cultural advancement beyond their basic living needs and have survived outside the boundaries of civilization for countless generations. Estranged from urban living, some of these groups of prehistoric hunter-gatherers learned to make and use stone tools and have continued to do so for hundreds of thousands of years. Sometimes their slow-paced development came to an abrupt halt when they were conquered by other hunter-gatherers or by agricultural peoples.

Today, in some parts of the world, indigenous tribes of hunter-gatherers continue to exist in much the same way as those of antiquity; at the same time, colonizers from powerful nations continue to seek new lands and peoples to conquer. Things haven’t changed that much for the hunter-gatherers, and the “stone age” of the past, in some ways, is alive and well today.

Apart from our two categories of traditional peoples, past and present, we can define a third category of humans: one that is growing at a phenomenal pace and may be destined to replace the other two traditional groups altogether. This third category represents an extreme departure from our two natural ancestral dietary traditions, moving to one based on artificial foods. The people in this group use an “imitation diet,” which we shall touch upon near the end of our discussion. For now, let’s get back to our two ancestral groups and their diets.

Clues from the Past

It is important to keep in mind that anthropological and archeological conclusions are very often based on hypotheses and conjectures derived only from evidence that conforms to, or is made to conform to, preexisting paradigms. It is through this exclusive evidence that history is then reconstructed and often presented to the public as though it were fact. By re-examining this evidence and combining it with other reliable sources, we are able to create alternate theories and arrive at different conclusions from those we have been given. Let’s consider the primary means of obtaining supportive evidence, see how it is used to formulate commonly accepted beliefs about Paleolithic dietary history, and realize how these conclusions are not the only interpretations possible.

There are four basic means of obtaining evidence when trying to understand our ancestors’ traditional diets; other methods are usually extensions or variations of these four.

A. The analyses of stone tools, animal bones, and charred seeds found in prehistoric sites, mostly in or around lake settlements, hearths, fire pits, and caves.

B. Examinations of a few prehistoric human and other primate specimens, mostly incomplete fragments of skulls and skeletons.

C. Examinations of the world’s prehistoric cave and rock art, depicting hunting scenes. This method also includes other art forms, such as pottery and textiles.

D. Cultural comparisons between ancient hunter-gatherers and modern hunter-gatherers. These also include comparative analyses of bones, teeth, and genetics of pre-agricultural and agricultural peoples to speculate and generalize about their health characteristics as compared to one another.

Each of these forms of evidence, regardless of the context in which it is found, is conveniently placed within a single established framework: the cultural evolution theory. However, all four data collection methods have resulted in additional, anomalous evidence that does not fit the accepted paradigms; such anomalous evidence is often simply discarded.

For example, anthropologists have found human bones and stone tools in North America dating thousands of years before what they considered the earliest human occupation of this region. Also, many examples of very ancient stone tools have been found throughout the world that show craftsmanship superior to more recent examples. This suggests that stone tool making did not always evolve gradually or consistently, as is usually suggested. Agricultural tools, too, have been found in early strata, dating well before the accepted timeline for agricultural origins.

Historical dating is based on the assumption of uniformity, a gradual and consistent depositing of strata over millions of years. This idea of uniformitarianism does not fully consider the evidence for the many global catastrophes that have occurred throughout history, sometimes with effects so devastating that virtually all remnants of civilizations were obliterated. Human relics, dinosaur fossils, and any number of other remains have been found mixed in strata dating back millions of years, when each theoretically should have been found in its own particular stratum, far removed from all others. This sort of evidence appears all over the world and clearly proves that nature does not always behave in a consistent, regulated manner.

Nevertheless, for convenience and consistency’s sake, I will use the standard dating sequences of human remains and other historical artifacts for purposes of discussion throughout this article.

Lack of plant evidence in strata at some Paleolithic sites seems to indicate that hunter-gatherers consumed mostly meat, with few fruits or vegetables, because animal bones were often found in abundance around prehistoric hearths, while plant remains rarely show up in very early strata. However, plants have a much higher decomposition rate than bones or stone tools, and this is the reason we find little evidence for plant consumption in ancient strata. Pollen samples are often used to obtain scientific data on prehistoric plant matter, but results can vary considerably.

While animal bones and carbonized wild grass seeds found in prehistoric lake settlements seem to indicate that meat and wild seeds were part of the hunter-gatherers’ diet, we should not assume that few or no plant foods were consumed just because few flora samples show up in prehistoric strata. Obvious exceptions would be ice age sites, where peoples similar to the modern-day Inuit lived in environments whose climate was unsuitable for plant growth. But while evidence from ice age settlements is plentiful, these sites do not represent the only lifestyle of early humans, who had to endure changing global conditions over hundreds of thousands of years. Some historians now even suggest that prehistoric hunter-gatherers regularly included wild and domesticated cereal grains in their diets.

For example, we find evidence today of early agricultural tools in several areas of the Near East where groups of robust peoples we call the Natufians once lived.

Of particular interest is the presence of sickle blades, sickle handles and even some intact sickles. The blades often have a sheen or gloss, which is taken to indicate that they had been used to harvest cereals, either wild or tame. Grinding and pounding equipment, both stationary and moveable, was also abundant. All the equipment for cultivating cereal grains is present in the Natufians’ industries, but there is no indication that either plants or animals were domesticated. The Natufian people lived in an area in which wild wheat and barley are abundant today and presumably were abundant at that time.2

The Natufians are a good example of grain consumers whose skeletal remains reveal robust health. In fact, there are many indigenous hunter-gatherers living today who include a large quantity of plants in their diets. The assumption that agriculture and the domestication of cereal grains began for the first time in the Neolithic period is tenuous.

While we can no longer deny the regular use of wild grasses by some early hunter-gatherers, we must also understand that these early peoples may have had uses for wild grasses other than as food. Examples of charred wild grasses have been found in Paleolithic sites and are often interpreted (in attempts to support the theory of the evolution of cultivated plants) as primitive examples of pre-agricultural food sources from which cultivated cereals later evolved. While in some cases this is possible, as a few samples of paleofeces have revealed the remains of grass seeds, it is more likely that this evidence represents the early use of grasses for fuel, baskets, bedding, or any of a range of other purposes commonly found today among the world’s non-agricultural indigenous peoples.

Ancient Structures and Primitive Art

The first evidences of civilization around the world are megalithic stone forms so old that archeologists have trouble assigning dates, since stone itself cannot be dated with radiocarbon technology. Even if it could be, the dated age of such an artifact would not necessarily tell us when the megalith was cut and placed. Therefore, most of the dates assigned to these structures are based on pottery, bones, and other artifacts found near or on the sites. This, too, can be misleading: many of these artifacts are often remnants of cultures that followed the megaliths by thousands of years. In other words, many of these structures could be thousands of years older than suspected; some even show signs of vitrification.

These ancient stone structures obviously were not the work of early hunter-gatherers, who would have had neither any reason to build such edifices nor the technology to do so. The world’s many examples of monumental architecture were obviously constructed by technologically advanced agricultural peoples living in prehistoric times. And while we often think of the builders as primitives using the crudest of stone and metal tools, some of these megaliths could be produced today only with the most advanced equipment available—and some could not be reproduced at all, even with modern technology.

We refer to these nameless ancient experts of masonry simply as the “megalith builders.” The enigmatic walled fortress of Sacsayhuamán, sitting high on the hilltop above the charming city of Cuzco, Peru, is one example of their work. Technologically perfect, these stones easily pre-date the Western timeline of the earliest Incas; the techniques used for cutting and finishing these gigantic stones are no longer practiced. Indeed, later Incan and modern Peruvian stone works pale in comparison. The Incas themselves never took credit for these megalithic structures, attributing them instead to ancient giant culture-bearers. Throughout Peru one can find numerous examples of ancient Peruvian stone works piled in rubble upon gigantic blocks of precision-cut, magnificently crafted stone, built perhaps by some long-forgotten ancestors of the Incas. Though expert stone masons in their own right, modern-day Incas cannot reproduce these works. Who these builders were remains a mystery as we regrettably concede that these great methods in engineering and technology have been lost.

On the other side of the world at Ba’albek lies the undated and massive Trilithon, a gigantic stone platform consisting of three massive stones that were expanded upon with later Greek and Roman architecture. No one from known Western civilization could have originally built the Trilithon, though, because no one during that time period had the technology necessary to move three gigantic, 870-ton stone blocks over rough, uneven terrain, then raise them ten meters onto a platform and set them so precisely. It is highly questionable whether we could accomplish this feat even today. There has yet to be an orthodox explanation for these Herculean feats from our prehistoric past that can be demonstrated and proven.

Many ancient megalithic structures also remain as underground tunnel works that extend for hundreds of miles into the subterranean world. Someone was using a very sophisticated technology many thousands of years ago to design these megalithic structures, few of which have ever been equaled by anyone later in history. If we are to believe the current paradigm that proposes that these and numerous other superhuman feats were accomplished by humans, we have little choice but to accept the existence of advanced civilizations in prehistory.

Ancient Cave Art

Prehistoric cave paintings at Lascaux, France, and at other European sites date from roughly 17,000 to more than 30,000 years ago. These and other cave paintings throughout the world reveal an extraordinary sophistication, not only in the artwork itself but also in the subtle messages portrayed. The Lascaux art, for example, uses the natural contours of the rocks to give three-dimensional appearances to many of its figures, which almost seem to come alive when viewed at different angles.

Many of the world’s cave paintings are so unusual and modern in scope that few could possibly believe primitive cave men could have made them. Still others are so completely mind-boggling and difficult to interpret according to accepted theory that they are intentionally left out of history books. An historian might suggest that the cave paintings portraying hunting scenes of mammoths and other animals are the only examples depicting actual daily life during prehistoric times. However, within this context we are left with little choice but to write other paintings off as representing the influence of mere superstition, or perhaps drug-induced hallucinations—yet these explanations do not account for the amazing skill level and exceptional attention to detail these works display.

Although there are many available caves throughout the world, cave dwellers today are few and far between, which suggests that people choosing to live in caves during prehistoric times probably would have done so only for extreme reasons. Cave dwellers may have resorted to this lifestyle to protect themselves from an inhospitable climate, for example. Exposure to predators was another concern for these early humans; the saber-toothed tiger and giant bear were two likely man-eaters that early cave dwellers would have needed to avoid. Troglodytic existence is quite rare among indigenous peoples today.

So, who were these ancient cave artists? A few experts suggest that Cro-Magnon men were the artists responsible for some of these paintings. Perhaps, but basing this on the assumption that primitive Cro-Magnon or other hunter-gatherers were the only available artists at the time doesn’t make the conclusion true. What about other cave paintings with high levels of sophistication that we find around the world? Hunter-gatherers today don’t even paint in caves. There are cave paintings dating back 11,000 years in Australia, and modern Aboriginal people assure us that the paintings were made long before their own arrival.

What are we to make of a scene that details ten stars of the night sky, the seven stars of the Pleiades and three other constellations—four of which we need telescopes to see? Astronomers say these four stars were not visible in prehistoric times. Could a human with exceptional eyesight have seen them and later documented them in a painting? Could the artist have experienced astronomical awareness while astral traveling under the influence of a psychotropic drug? While either of these two unlikely explanations is possible, explanations that are more plausible might be: 1) that the artist learned astronomy from someone with that knowledge, or 2) that the artist was himself an astronomer who had survived a cataclysmic event. Such an event might have forced him or her to revert to a primitive lifestyle staying in the cave, either temporarily or long-term. Perhaps the cave art was a message intended to assure future generations that survivors from a sophisticated civilization had been there.

There are many examples of extremely ancient art that appear more advanced in technique than more recent ones. Cave art can be likened to modern graffiti in that the artists create it as a message on a wall. Sometimes the message is misunderstood or misinterpreted by people analyzing the work, while in other cases the message is perfectly clear. Just as there are variances in the sophistication of modern graffiti, ancient cave art also has its levels of expression, ranging from crude to artistically intelligent and precise. We have examples of “mixed messages,” with one scene rendered as simple stick figures hunting game, other depictions of similar scenes in exacting detail, and still others depicting entire complex panoramic scenes. What is remarkable is that traces of vibrant colors in some of these more sophisticated cave paintings are evident even after many thousands of years. Color retention in paint is a demanding craft that has developed only gradually in modern times and that paint makers in the art industry still strive to perfect today.

In response to the question of how exquisite ancient artworks were created in the dark recesses of caves, we are told that these prehistoric artists used torches or, in some later examples, oil lamps in order to see what they were painting. This answer seems reasonable until we consider some contradictory details that alternative historians have pointed out. For example, some of these sites lacked the ventilation necessary to support the prolonged burning of fuel. For whatever duration the torch or lamp could burn, the artist would probably have had to endure breathing the heavy, toxic smoke resulting from an inefficient fuel source. The lack of ceiling soot in some of the caves also suggests that no fire whatsoever was used.

This is also the case with some of the Egyptian subterranean artworks. Some Egyptologists answer this apparent mystery by explaining that the early Egyptians used mirrors of highly polished silver or copper. However, we have to question the practicality of this explanation: many of the long, narrow underground passageways leading to this artwork have sharp turns that would have made mirror placement extremely difficult if not impossible. Prehistoric cave art has also been found in high, dark, and almost inaccessible parts of caves where the artist would first have to have built some form of scaffolding (unless they were extremely tall) to access the rock “canvas.”

The question of what could possibly have been the light source for some of this art is still a mystery. Perhaps some of these examples were created with salvaged technological remnants of a collapsed civilization.

Wall paintings have not been the only kind of prehistoric art found in caves. Recently, figures carved from mammoth tusks dating from 30,000 years ago were found in a cave in southwestern Germany. The figures are realistic depictions of a water bird, an anthropomorphic lion-man, and a horse’s head. They are said to rival the quality of work from the high civilizations of 3,000 to 4,000 BC. These figures call into question the theory that human artistic skills have evolved gradually in a single continuum. Several sources document news of the discovery.

The researchers said they believed the figurines were created by early anatomically modern humans… Radio carbon dating used to date the carvings is inexact, but the objects were almost certainly made between 28,000 and 35,000 years ago….3

Another article focuses on the primitive shaman theory, since the bird is a common shaman motif. Many tribal groups today practice some form of shamanism; however, this is only one explanation for the carvings’ subject matter, for the workmanship itself is of very high quality. Yet another source challenges the theory that there was a gradual evolution of artistic skills by stating:

The carvings…are considered to be the same vintage as 20 similar ivory artifacts, including ornaments and musical instruments, found in nearby Swabian digs. They join a clutch of other archeological surprises, including the intricate French cave drawings in the Grotte Chauvet and the discovery of a sophisticated use of textiles and clay in what is now the Czech Republic. They all debunk the notion that art developed over eons at about the pace that Homo sapiens moved out of the cave. I guess the bottom line is we’re dealing with people who are at a cultural level very similar to ourselves.4

Also relevant is this article’s mention of the discovery of cooking pottery found in a cave in China:

The cave yielded the country’s most primitive potsherds, estimated to be 12,000 years old. Like any technological innovation, the creation of pottery is believed to have been embedded in some cultural context.5

The Tassili frescos of the Sahara Desert are another example of highly sophisticated art. This inaccessible area is rarely frequented by anyone other than the indigenous Tuaregs. Like many other indigenous peoples throughout the world, they have no idea who the artists were who created the ancient art in their environs. Along with scenes of hunting are scenes of otherworldly, round-headed humanoids, bird-headed people, and both dark- and light-skinned women, dressed as though they just stepped out of the latest fashion magazine. The paintings and carvings are prolific and could date from between 10,000 and 2,000 BC.6 Were the artist-hunter-gatherers exercising their creative skills under some potent herbal drug that gave them visions of the future? Or were they survivors from a lost civilization leaving records for future generations to interpret?

First Farmers, Later Hunter-Gatherers

In the dense Brazilian rainforest, archeologists were shocked to find a 1,000-year-old, 15-square-mile network of towns and villages connected by a system of broad, parallel highways. The researchers were shocked because it had long been thought that the pre-Columbian rain forest was a wild ecosystem unaltered by humans and occupied only by various hunter-gatherer tribes. The indigenous Xinguano and Kuikuro tribes now living there were unaware of the accomplishments of their ancestors until this discovery. Following are some highlights from news articles:

Ancestors of the Kuikuro people in the Amazon basin had a “complex and sophisticated” civilization with a population of many thousands during the period before 1492. These people were not the small mobile bands or simple dispersed populations that some earlier studies had suggested…the people demonstrated sophisticated levels of engineering, planning…in carving out of the tropical rainforest a system of interconnected towns making up a widespread culture based on farming…. The people also altered the natural forest, planting and maintaining orchards and agricultural fields…7

In reference to the ancient settlement that included raised causeways, canals, and other structures, the article states, “They are organized in ways that suggests a sophisticated knowledge of mathematics, astronomy and other sciences….”8

While 1,000 years may not seem such a long time, it was certainly long enough for people who had returned to a hunter-gatherer lifestyle to forget completely where they came from. Current aerial photographs indicate that the entire Amazon forest may have been engineered with settlement mounds, irrigation canals, agriculture, and roads at some time in the distant past. These new findings are literally shattering the “pristine myth” that the Americas, before being discovered by Columbus, were an untouched Eden occupied by primitive hunter-gatherers.

It is not known when this massive engineering project took place, but it could have occurred numerous times over thousands of years, each time ending with the lush forest completely engulfing the long-abandoned areas of development. Excavations at many neighboring South and Central American pyramids reveal a repeating history of building and rebuilding by subsequent settlers.

These discoveries also call into question the origins of what appear to be wild food plants in the Amazon forest. Perhaps many medicinal herbs and food plants gathered by resident tribes today are but free-running descendants of what were once cultivated crops of ancient agriculturists. Dates for other recently discovered agricultural sites in Peru and Bolivia are being pushed back nearly 5,000 years as long-standing theories are being challenged.

In the desert of the Supe Valley, near the coast of Peru, lie the remains of Caral, a city that flourished nearly 5,000 years ago. Findings reveal a peaceful city of pyramids and homes founded on farming and trade. Spanning 35 square miles, Caral all but destroys the popular theory that civilization was the result of warfare. While discussing ancient agriculture in South America in his book The Living Fields, Jack R. Harlan refers to other researchers: “Levi-Strauss (1950) and Lathrap (1968), among others, have suggested that most, if not all, hunter-gatherers in South America are ‘drop-outs’ from farming.”9

In Graham Hancock’s seminal work Fingerprints of the Gods we find another example of “lost agriculture” in Egypt. On pages 412–413, he refers to Hoffman’s Egypt Before the Pharaohs and Wendorf and Schild’s Prehistory of the Nile when discussing the mysteries of a “Paleolithic agricultural revolution.” Grinding stones and sickle blades used in the preparation of plant foods were found in the Nile valley and dated to around 13,000 BC. While this may not be so unusual, what makes it interesting is that fishing declined in the area at this time and barley suddenly appeared—just before the first settlements were established. Moreover, hunter-gatherers replaced grinding stones and sickles with stone tools about 2,500 years later. Based on the evidence, Hancock suggests that agricultural practices were established around 13,000 BC in Egypt, but the great Nile floods of 11,000 BC led to the abandonment of agriculture and caused a prolonged relapse to a more primitive lifestyle.10

How many other ancient civilizations lost their agrarian-based cultures to an adaptive hunter-gatherer lifestyle? Such may have been the case with the ancient pre-civilizations of Egypt, China, Mexico, Indus Valley, Sumeria and others that later re-emerged as what now appear to be our “earliest examples” of civilizations.

Sites of large urban developments from antiquity have been found in various areas of inland and coastal regions. Ancient urban peoples clearly used their resources and knowledge of agriculture in harmony with nature to effectively support their growing populations. Some of these cities were comparable with modern cities in size and population, complete with sophisticated waste management systems, drainage, running water, and irrigation canals. The Giza plateau in Egypt contains an elaborate maze of underground tunnels carved out of solid limestone bedrock with precise right-angle turns that stretch for miles. Modern research in this area suggests that these tunnels represent an elaborate irrigation system used to transport water from local rivers to what were once neighboring cities and their agricultural centers. Other records available from these civilizations reveal lifestyles embedded in the advanced sciences of agriculture, astronomy, architecture, and engineering.

These qualities do not appear to have been an evolutionary process; in most cases throughout the world, they rather appear to have been a legacy left by previous civilizations. These and other findings help to prove the capabilities of ancient humans, dispel the “all pre-agricultural peoples were primitives” theory, and strongly support the notion of culturally advanced people with sophisticated abilities living in prehistory.

Time and again, when discoveries do not support an orthodox theory, those discoveries have been kept from public awareness, because the theory would crumble if the controversial evidence were made known. That such evidence has been withheld is indisputable.

Why has so much evidence not supporting accepted theories of human history been dismissed? Raising this question would of course be unnecessary if all the puzzle pieces of history already fit nicely into place, but they don’t. Ideally, the world’s recognized anthropologists, archeologists, and historians would assemble, discuss all the evidence available, decide how the information pertains to all possible theories, and present their findings to the public. Until that day, dissatisfied newcomers will need to investigate sites, dredge through archives, run tests, publish their own findings, and speak out to gain credibility for an alternative theory.

When we start to include all the rejected pieces of this extraordinary puzzle, it becomes crystal clear that there is a great deal more to human history than conventional theory would have us believe. These rejected pieces are the very information that could help solve one of life’s greatest mysteries. Until all the cards relative to the traditional lifestyles of ancient peoples are placed on the table, we are left with no choice but to seriously entertain the idea of planet-wide coexistence between hunter-gatherers and agricultural peoples in prehistoric times.

Stature and Health Among Traditional Peoples

Some anthropologists claim our hunter-gatherer ancestors were taller than the agricultural types—and therefore healthier. The idea of greater height as a barometer of better health is typical of the Eurocentric point of view stemming from early anthropological research; just as typically, when examined in a global context, both in the present and in the fossil record, it proves misleading and incorrect. I have personally witnessed robust health among both types of traditional peoples from different parts of the world, of varying heights and builds.

Oxygen, carbon dioxide, and radiation levels, among other environmental conditions, have varied immensely throughout our past, likely affecting the conditions of flora and fauna, including hominids. Fossil evidence from Paleolithic times generally shows most flora and fauna to be quite large. Using the “man is an evolved animal” theory, is it so unusual to find larger hominid fossils during Paleolithic times as well? Paleolithic flora specimens appear gigantic compared to their later counterparts, even those of the Neolithic period. Fauna specimens of the Pleistocene period, including those of the wooly mammoth, giant sloth, saber-toothed tiger, and giant bear, are much larger than the mammals that followed.

Today, many free-ranging, domesticated ruminants are smaller than their wild counterparts, yet they are not less healthy simply because they are smaller and domesticated. Can we unequivocally say that prehistoric mega-fauna and -flora of a particular era were healthier simply because they were larger than the specimens of a later age? Not really. What we can surmise is that many plants, animals, hominids, and humans from a particular time in pre-history differed from many of those that followed, and that each adapted as much as possible and were suitable to the environment of their day.

Just because the cranial capacities of Cro-Magnon and Neanderthal were larger than that of today’s human being doesn’t mean they were smarter than we are. To assume this would be the equivalent of saying, “All generally tall Germanic peoples are smarter than generally shorter Japanese people.” Many tall traditional peoples have robust health, but so do many short traditional peoples, some averaging less than five feet in height. The agricultural highland Peruvians, for example, are short in stature and healthy, according to the studies and research of Weston A. Price.11 Early Western explorers of South America and Mexico were often carried over treacherous mountain terrain for miles on the backs of sub-five-foot-tall Peruvians and Mexicans. These same tiny agricultural people of the Peruvian highlands could run for 30 miles or more, starting at 9,000 feet above sea level, where the air is extremely thin, down to coastal regions, and then return the same day with fish they had caught or traded for their ruler’s dinner—hardly examples of weak, unhealthy people.

To accurately study the history of stature and health in our ancestors, we would have to include the giant skeletons found in fossil deposits throughout the world. In the mid-1800s and up through the early 1900s, many human skeletons ranging in height from seven to 18 feet were found in North America and around the world. These fossils were excavated from mounds, caves, and many different levels of strata. Some dated back to the Jurassic period, over 185 million years ago. Newspaper reports, along with many reputable witnesses, attest to the truth of the discoveries, yet none of these numerous unusual fossils have ever been entered into the fossil record. Some of these skeletons were found with axes and stone tools. Some specimens had even gone through the process of mummification. Were these giants healthier Homo sapiens than average-sized Homo sapiens, or were they a different species altogether?

Generalizations about health and stature among ancient agricultural peoples and hunter-gatherers based on fossil evidence can be misleading, as it is not clear how many ancient agriculturists had reverted to the hunter-gatherer lifestyle in prehistoric times, nor do we know how many cycles of agriculture and civilization there may have been in our complex history as Homo sapiens. Such a reversion process, demonstrated by the modern examples of the Amazonian tribes mentioned earlier (whom scientists once thought had a long history as hunter-gatherers, only to find that their ancestors were agriculturists who developed sophisticated city states), could very well be one of many such cases. The Amazon discoveries may be the first examples of what could be a worldwide phenomenon. These findings, in addition to the myths and legends told by many ancient hunter-gatherers and agriculturists, could help to explain the sudden emergence of agriculture in some parts of the world. They would also help to explain the discoveries of domesticated cereals and other crops found in archeological sites with no wild progenitors and no signs of previous agricultural experimentation.

Does diet play a role in physical stature? Certainly. People who consume dairy products, for example, tend to be taller than nondairy eaters. However, this does not mean that dairy consumers are healthier than those who consume little to no dairy products. Today, people in Western countries are becoming taller but generally eat inferior foods, compared to those of traditional people. Increased stature today is often caused by foods laden with growth hormones and other hormonal stimulants, the result actually being a decline in health correlating with increased height! While diet may affect stature, peoples’ heights and hat sizes have little to do with robust health, high intelligence, and longevity.

Defining Traditional Diets

If the theories of human and cultural evolution are reasonably valid, we can accept the idea that our diet should consist of foods our ancestors gathered, hunted, and fished. But what do we really know about these ancestral diets, and how do we know what we know?

With little evidence from our prehistoric ancestors, except for some telltale signs from bones and stone tools, much of the information about traditional diets has been gleaned from studies of various present-day indigenous peoples, who continue to live primarily as their ancestors lived. However, some of today’s indigenous peoples, through the influence of other cultures over time, have added new food sources to their diets, creating a modified version of what their ancestors ate. The introduction of new culinary tastes and experiences by outside cultural influences has been a common practice throughout history and likely prehistory as well. Sometimes new foods have improved the health of the people. At other times, the change has contributed to their demise. Traditional peoples who have incorporated large quantities of modern refined flour, sugar, and processed foods into their diets have experienced a sharp decline in health during the last two centuries.

There are many different opinions on what constituted a pre-agricultural diet, and proponents of most of them can easily find scientific evidence to back their theories. It has been said that our Paleolithic ancestors did not consume dairy products. According to evolutionary theory, it is assumed that dairy products were not incorporated into the diet until the onset of agriculture and animal husbandry during the Neolithic period. However, we are not certain that this is the case. A UPI Science News report challenges one such widely accepted notion about the onset of dairy consumption:
…traces of milk some 6,000 years old in Britain, the earliest direct evidence known of human dairy activities… “first direct evidence milk was consumed by humans in the early Neolithic, or Stone Age.”13

Although still fitting the accepted timeline for animal domestication, this report places the use of milk at a far earlier date than previously thought for Britain. People began herding animals earlier than 6,000 years ago in the Near East, but the idea of primitive herders being the only humans who had connections with animals—other than hunted prey and the domesticated dog—is an assumption based on a lack of evidence from anything earlier than about 12,000 years ago. Perhaps the use of animals’ milk, like grain domestication, extends much further back into prehistory than the incomplete facts and assumed theories suggest.

With more evidence supporting the existence of Paleolithic agriculture, it is reasonable to assume that animal domestication occurred at an earlier time as well. Could the extensive harvesting of wild grasses by some Paleolithic hunter-gatherers, as noted by researchers and scientists, have been feeding their domesticated livestock? This is highly plausible, in that our fossil record for early animal domestication in the Near East includes bones from both wild and domestic sheep and goats. The morphological similarity of modern and ancient sheep and goat bones is so close that even under close examination they are often indistinguishable from each other. As with the hypothetical “Agricultural Revolution” and the origin of grain domestication, perhaps it is also unwise and far too early in the game to confine animal domestication to an imaginary time period.

Ancestral Nutritional Problems

Due to at least 100,000 years of extreme climatic fluctuation from the late Pleistocene up through the beginning of the Holocene, about 12,000 years ago, it is difficult to accurately compare the health of hunter-gatherers and agriculturists in antiquity. Both groups throughout history have experienced periods of abundance, scarcity, and famine, depending on the prevailing climate, geography, and resources. The early Egyptians, for example, like other past civilizations, faced times of war, famine, drought, and other environmental problems over many generations since prehistory; yet for thousands of years, these people enjoyed a wholesome, varied diet abundant in both plant and animal products. Because of these facts, we cannot simply state, as some have done, that some human fossils revealing signs of ill health resulted from a diet high in grain and low in protein and fat. The analyses of a few, or even 100, mummies or other fossils of ancient peoples from around the world at various times in history are hardly enough to conclude—again, as some have claimed—that all ancient Egyptians or other agriculturists suffered from ill health for the last 10,000 years or more, as compared to Paleolithic hunter-gatherers.

Some historians suggest that because hunter-gatherers were mobile and thought to live in groups smaller than 100, they were less susceptible to the diseases and health problems faced by agricultural civilizations. Small, moving groups tend not to pollute their water supply or attract rodents and insects, all strong contributors to disease in civilization. While this may be true to some degree, it is also known that hunter-gatherers at times endure famine and an insufficient supply of animal protein. When this “meat hunger” occurs, what little meat is available typically becomes rationed. The males who hunt down the food are given priority, while women and children have to subsist on whatever amount remains uneaten, if any remains at all. This lack of protein has been known to last anywhere from several days to weeks for the less fortunate tribe members. Some hunter-gatherers today show signs of malnutrition from protein deficiency.

The notion that “small and mobile is better” is placed into a more realistic perspective when we consider the knowledge of early agriculturalists. Like hunter-gatherers, the sophisticated civilizations of early agriculturists had natural medicines to help combat disease. Both groups had a working knowledge of the medicinal qualities of the plants and animals in their environment, knowledge that was passed down from generation to generation. Herbology, traditional Chinese and Ayurvedic medicines, and numerous other highly effective healing systems are contemporary examples of such wisdom passed down through many generations.

Archeological records also suggest that early agriculturists made use of the natural antiviral and antibacterial qualities of herbs and spices. The domestication of cats around 9,500 years ago may have been an effort to control rodents in settled communities. Evidence is also accumulating of severe health problems among some groups of ancient hunter-gatherers during various periods throughout history, not to mention the taboo subject of cannibalism. These new discoveries are challenging previously held beliefs, making it unreasonable to assume that hunter-gatherers possessed superior health over agricultural peoples.

In the present day, the health of some modern agriculturists practicing traditional farming methods far exceeds that of some modern hunter-gatherers; in other cases, the opposite is true. There is no reason the same would not hold true for our ancestors as well. Other studies show that some ancient agricultural peoples had remarkable bone densities and extraordinary life spans. In other words, civilization is not necessarily a disease-infested way of living. Our ancient agricultural ancestors developed ways of handling the many negatives of a densely populated, less mobile lifestyle, just as nomadic hunter-gatherers have found ways to cope with their cyclical changes.

It is quite easy to gather scientific evidence for either the hunter-gatherers or agriculturalists in order to fit a particular dietary agenda, such as a low-carbohydrate or low-fat perspective. If we wanted to downplay the diets of the hunter-gatherers, we could emphasize the long history of cannibalism practiced routinely among some groups until only recently, as documented by anthropologist Marvin Harris14 and others.

Many hunter-gatherers have suffered (and still do today) from long-standing parasitic infections. Hunter-gatherers often feed in an area until it is depleted. Their lives can be marked by internal strife, short life expectancy, population control by infanticide, incest, rape, and violence from tribe to tribe. On the other hand, the lifestyle of some agricultural peoples has had its shortcomings as well, with dental caries, arthritis, the practice of genocide, epidemics, and numerous other problems. In fact, the two groups share so many characteristics that if we were to swap problems between the two groups, we would probably find that eventually they would both end up with the same problems they had before, albeit with slight variations.

Based on the evidence, what we can safely conclude is that physical and mental health problems occur in both groups of people when nutritional balance is adversely affected by external influences. Comparing the modern diets of both groups is impractical and misleading, because many hunter-gatherers today still maintain a natural diet largely similar to their ancestors’ diets, whereas most modern agricultural people maintain a diet based on artificial foods. And while it is helpful to understand the functions and behaviors of isolated nutrients in foods, the approach of modern nutritional science is severely lacking in the nutritional common sense and wisdom of both groups of our ancestors.

In essence, researchers have not found a specific meat- or plant-based Paleolithic diet that represents an overall example we could reasonably call “our ancestral diet.” Food choices vary considerably within both groups. Staples of insects and monkey brains, for example, are daily fare for the hunting and gathering Mentawai tribe of Sumatra, while the agricultural Incan descendants living in the highlands of Peru find their sources of nutrition in cuy (a domesticated guinea pig) and cultivated quinoa.

Dr. Weston A. Price offers a very balanced perspective on traditional peoples and their diets, pointing out the numerous health benefits of living a natural lifestyle through his studies of traditional peoples throughout the world. Although the diets of the groups he studied varied considerably due to climate, environment, and geographic location, Dr. Price is able to show how natural, unrefined foods contribute to robust health. He concludes that it is not a matter of whether or not a people practice agriculture; rather, it is whether their diet includes the essential foods that make it a healthy one. Among people with ample amounts of nutrient-dense foods, he found better overall health than among those lacking sufficient amounts of these foods.15

In the past, as today, people throughout the world lived in widely varying conditions and circumstances. Today some people live in poverty and suffer from numerous nutritional deficiencies while others live affluent lives and still suffer from nutritional deficiencies. The most important difference between agricultural people of the past and people of the present is that the overwhelming majority of people of the present suffer from “environmental amnesia” and have lost their intimate connection with their natural surroundings, while many people of the past, whether rich or poor, maintained harmony with nature through their food and agricultural practices.

Diet Evolution

From the evolutionary perspective, man’s earliest primate ancestors ate a diet of fruits, nuts, leaves, roots and a small percentage of meat, not unlike modern-day chimpanzees and apes. These primate “ancestors” were said to have evolved to the point of being able to use stone tools about 2.5 million years ago. Stone tool usage represented the beginning of technology and led to an increase of meat and fat consumption in the form of small, easy-to-pursue animals.

This period of increased meat consumption coincided with an increase in brain size, which is considered an important effect of consuming nutrient-dense food in the form of animal fat and protein. However, the increase in brain size could also have been caused by the need to use the brain more in order to hunt prey. Increased brain size gave hominids a new branch among the primates on the ancestral tree 1.8 million to 500,000 years ago. Then the hominid Homo erectus appeared, with a fully functional brain that gave him the capacity to hunt big game. Large quantities of animal bones found at some archeological sites, along with stone tools from this period, confirm the regular consumption of prehistoric game animals.

While other hominid species appeared throughout these long periods, it was not until about 200,000 years ago that modern humans made their appearance. This period also coincided with the first evidence of cooking. By cooking their food, modern humans increased the available energy content of plant foods, especially complex carbohydrates (wild grasses, tubers, and roots), which, in combination with big game animals, supposedly contributed to another leap in the evolution of brain function.

After about 190,000 years of continuous hunting, fishing, gathering, scavenging, and cooking, modern humans began to farm. Hunter-gatherers presumably had hunted the megafauna to extinction in some parts of the world. Lack of available prey then led to the need to farm the land.

Many hominid species in the evolutionary tree are not mentioned here, and some of the dates vary among historians; what we have, then, is a basic outline of a widely accepted—though unproven—theory of diet evolution.

The Neanderthals

Although evolutionists do not believe humans evolved from chimpanzees or apes, they do believe that humans and chimps have a common ancestry. Characteristics associated with being human include loss of thick body hair, bipedal movement, tool- and weapon-making, use of fire, creation of clothing, and language development. Each is thought to have evolved along with our ancestors. Evolution, rather than representing a ladder leading upward in a straight line, is better understood as a tree with many hominid branches that include Homo erectus and Homo sapiens. In this view, all hominids are “cousins” that somehow culminated in modern humans through millions of years of mutations.

However, DNA analysis of Russian Neanderthal remains dating back 29,000 years reveals “that modern humans are not related to Neanderthals,”16 disproving conventional scientific opinion. Instead, Neanderthals represent a completely different species of hominid. Additional tests on Neanderthal remains found in a cave in Germany show the same results. Both studies imply that Neanderthals “…don’t have the diversity to encompass a modern human gene pool.”17

Another theory suggests that Neanderthals didn’t have the technological means to survive the increasingly harsh winters of the ice age. One has to wonder about this theory, when evidence for Neanderthal intelligence has been well documented. They buried their dead in specific astronomical directions, showed signs of artistic creativity, knew how to make fires, lived in caves, and ate a diet consisting exclusively of meat. The same theory suggests that modern hairless Homo sapiens had what it took to survive because they could make throwing spears, fishing nets, and fur clothing. It is difficult to believe that Neanderthals, with their developed brains, could not have figured out these basic survival skills as well. Still, the experts continue to debate the Neanderthals’ social and genetic relationship to modern humans.

Guess Who’s Coming To Dinner?

What really caused the demise of the Neanderthals, wiping them from the fossil record around 30,000 years ago? Human bones found at some Neanderthal sites strongly indicate the regular practice of cannibalism. This should not surprise us, since our human history, right up into recent times, includes numerous examples of cannibalism. Anthropologist Marvin Harris writes about the history of cannibalism. In Good To Eat, Harris states: “When first contacted by Europeans, the peoples of New Guinea, northern Australia, and most of the islands of Melanesia such as the Solomon Islands, the New Hebrides, and New Caledonia practiced some degree of warfare cannibalism.”18 Later on, while discussing other issues related to diet and lifestyle, Harris makes a point of saying that not all Polynesian islanders practiced cannibalism: “All three of the Polynesian groups that practiced warfare cannibalism also lacked the highly productive agriculture and fisheries which characterized the politically centralized Polynesian Islands.” A few agricultural peoples also practiced cannibalism, but cannibalism tended to predominate among peoples lacking centralized governments and the accompanying agricultural systems.

It is interesting to note that in the tribal lore of modern tribes who practiced cannibalism, there is a common belief that consuming another person, be it conquered warrior, relative, or other, endows the consumer with the powers or energies of the person consumed. The consumption of a powerful opponent, then, means more power and energy for the consumer. While this may sound barbaric or disgusting to our sensibilities, many primitive and civilized peoples perceived nature and food of any kind as energy.

There are a few recorded examples of cannibalism among agricultural peoples as well. One of these describes the Aztecs, who first sacrificed their victims to the gods before consuming them, with only the upper classes and priests allowed to partake of the ghastly feast.

While the reasons for consuming human flesh may differ between meat-hungry tribes and civilized cannibals, the basic shared belief of obtaining the strength and power of a competing rival may very well be the reason for the demise of the Neanderthals and other “robust and powerful” hominids in the past. With the onset of the ice age and competition for food between “modern humans” (hunter-gatherers) and Neanderthals, including the history of cannibalism among both, perhaps desperate times led to desperate acts. Perhaps humans outnumbered Neanderthals, or maybe early hunter-gatherers considered a powerful Neanderthal a prize meal, when they could capture one.

In light of new discoveries in anthropology and archeology, it is increasingly difficult to define the character of early human and other hominid species. Some evidence confirms the barbaric and primitive qualities that previously defined prehistoric cave men. Other evidence reveals a fully conscious and intelligent species, not unlike modern humans at their best.

A basic problem with the orthodox view of ancestral Paleolithic diets is that it is based on a series of assumed progressions from nonhuman primates that eventually culminate with modern humans, yet an actual link between these primates and humans cannot be shown to exist. This is also the case with Homo erectus and Neanderthal: there is no actual genetic link that proves modern humans actually evolved from any hominid species. Without such a link, diets of other species need not be a basis for human dietary practices, modern or ancient.

Even though some of these hominids used fire and stone tools and competed with modern humans in hunting, the simple truth is that ancient humans (hunter-gatherers and agriculturalists) are the only true ancestral examples we have with which to accurately assess human dietary history. It is difficult to say whether diet had any influence in making us human; however, it definitely played a major role in the establishment of civilization and human development. And diet may very well be the most distinguishing factor between our hunter-gatherer and agricultural ancestors.

The Hunter-Gatherers

Some historians think that prehistoric humans were parasites of the land because they would deplete both plant and animal resources before moving on to their next habitat. However, new discoveries are being made almost daily that shed further light on the lives of prehistoric peoples, including Neanderthals, revealing extraordinary abilities in astronomy, art, pottery, and textiles. Evidence of mummification indicates knowledge of preservation and elaborate burial rituals. Until recently, we associated all these activities with civilization, not with primitive hunter-gatherers. Is it possible to have all these earmarks of civilization and no agriculture? Why did these prehistoric hunter-gatherers take so long to cultivate plants and domesticate animals? Some experts on cultural evolution now say that hunter-gatherers have, to some extent, been using agricultural techniques all along.

If what these new findings suggest is true, then we have numerous examples of prehistoric hunter-gatherers that were advanced in some areas that most modern hunter-gatherers have yet to reach. For modern hunter-gatherers, little has changed and their lifestyle appears much the same as that of their ancient ancestors. It seems as though evolution has ceased for them, and that their culture remains confined to the simple tools and materials necessary to survive the elements.

The fact that such examples still exist in parts of the world is remarkable, considering the extremely long time spans attributed to the evolutionary process and the intrusions into their territories by civilizations throughout history. How much longer hunter-gatherers will be able to continue living as they do remains to be seen. Will they disappear, like Neanderthal and Cro-Magnon? Both of these prehistoric peoples were supposedly more robust than agricultural peoples, and both had hunter-gatherer lifestyles and diets. How did the comparatively frail humans with little body hair, survivors of an ice age and the crowning achievement of hominid evolution, come to outlive the other Homo specimens? The answer we have been taught to accept is based on the limited view that human culture developed in a mechanical way through the use of stone tools and other material basics. Moreover, adherents of the cultural evolution theory inaccurately assume that current hunter-gatherers represent the only living examples of what humans were like before 10,000 years ago!

The hunter-gatherers have never lost their original primal instincts for survival. Today there are people from this group who know of agriculture but prefer to maintain their nomadic lifestyles. Tens of thousands of years of hunting would naturally hone one’s skills, especially when confronted with predators competing for food. Lacking the speed or agility of the lion, for example, these ancestors would learn at an early age what, when, and where a lion hunts. Having a close tie to the environment also allows the hunter-gatherer to observe animals’ relationships to plants and develop various uses for those plants through continuous sampling. This, in turn, leads to the development of natural medicines. Knowledge of plants and animals is a commonly recognized skill of hunter-gatherers, and it is this close relationship with nature that modern-day hunter-gatherers share with their ancient ancestors. Nature has been and remains their teacher.

The Paleolithic Diet Riddle

The following statements are some of the most commonly expressed opinions on traditional pre-agricultural diets by experts in the fields of paleontology and anthropology:

A. Early Paleolithic man had a diet much like other forest dwelling primates. This would be similar to what chimpanzees eat today and includes mostly fruits, some other plants, and small amounts of insects and rodents.

B. The diet of the early hunter-gatherers consisted of about 80 percent gathered foods (shellfish, eggs, and plants) and about 20 percent meat.

C. Man was a gatherer-scavenger; his diet consisted of mostly wild roots and other plants, occasionally supplemented by small amounts of scavenged animal flesh left by predatory animals or taken from them when possible.

D. Early man’s original diet was based on fish, seafood, and other marine life until he was forced inland from coastal regions by rising sea levels, where he learned how to hunt game.

E. The diet of early man was mostly meat, up to 80 percent or more, derived from hunting game.

If we were to create a multiple-choice question asking which of the above was the diet of our Paleolithic ancestors, the answer would have to be “all of the above.” The reason for such diversity of opinion on early ancestral diets is that evidence has been found to support them all. Naturally, as more evidence is accumulated, the theories get revised and updated, but current evidence shows a variety of regional diets for both early and modern hunter-gatherers. Based on this evidence, some of these early diets were healthy and supplied more than adequate nutrition for people, while other diets did not. Obviously, lush, tropical, coastal environs could provide a healthy diet of abundant plant foods with moderate amounts of animal products, whereas arctic dwellers would derive most of their nutrition from mammals and seafood, with fewer plant foods.

Those who toe the party line of cultural evolution persist in the belief that hunter-gatherers and agricultural civilizations did not exist side by side in prehistory. They also often glamorize hunter-gatherers as taller and healthier, as a result of their nomadic diet, than the much “later” agricultural peoples.

Aside from the fact that some of the evidence used to compare these two categories of peoples is from the remains of nonhuman primates, there is still no consensus on a particular diet for our hunter-gatherer ancestors, nor is there likely to be one. Not only that, it is simply untrue that health declined with the introduction of agriculture. Agriculture and animal domestication alone had nothing to do with the decline of health in humans. There are numerous examples where agriculture improved the health of people by introducing a wider variety of nutritional resources and reliable sources of protein foods.

How long humans have been in a state of declining health cannot be accurately determined through the study of fossil evidence alone. Many as yet unknown factors continue to haunt our past. What can be determined are the observable results of the last 200 years of environmental destruction from chemical agriculture and processed foods on the health of humanity. It is true that some anthropologists have supported the claims of “pre-agriculture health superiority,” but it is just as true that not all anthropologists and experts in the field agree with each other. Harvard anthropologist Ofer Bar-Yosef writes: “Natufian skeletons of the Levant represent robust and healthy individuals.”19 From this same source, we have:

A number of seminar participants (Keeley and Bar-Yosef, among others) did suggest that the most extreme forms of population pressure leading to skeletal pathologies would be unlikely to be related to domestication….20

Several diet gurus have suggested that grain domestication was the deciding factor in the decline of health after hunter-gatherers “transitioned” to agriculture, and therefore that grains (carbohydrates) are best reduced or eliminated for optimum health. It has even been suggested that the “decline in health” trend from grain eating 10,000 years ago has only begun to reverse toward improved health in the last 100 years. This is an interesting concept, one that implies that all the chemical agriculture and processed foods we have been eating for the last 100 years have actually helped to improve the declining health of the human species, a decline caused by grain eating. This is not only absurd, it is irrational.

This idea is further rationalized by claims that our hunter-gatherer ancestors did not eat grain and were healthier because of it. This is also untrue: it is now well known among anthropologists and archeologists that many early pre-agricultural peoples harvested large stands of wild grains, and it is believed that these grasses were a regular staple in prehistoric hunter-gatherer diets. Today we know that these wild grains are suitable only for grazing animals; the domesticated versions are the ones with high nutritional content and, when properly prepared, are an important source of human nutrition. Nevertheless, it is still believed that these large stands of harvested wild grains played a substantial part in many Paleolithic diets, particularly those of the Near East.

It has also been proven that when sustainable agriculture is practiced, including biodiversity along with other ecologically sound methods of animal husbandry, the people thrive. These were the original methods of agriculture, some of which are still practiced today by the Quiché Maya, by the Incan descendants in the highlands of Peru, and elsewhere among modern-day natural agriculturists. Biodiversity in agriculture and animal husbandry can provide additional varieties of nutritious foods that improve health.

Ancient Eco-Agriculture

The understanding of cosmic cycles was a common theme among many ancient peoples, as were biodiversity and other methods of sustainable agriculture. Because of this, the food produced and consumed by these people living in harmony with nature would have been of a much higher quality than, and thus nutritionally superior to, the highly processed food of today’s mono-agriculture systems.

Our urban populations are nourished mainly on imitation foods devoid of health-promoting properties. We call this “progress” and rationalize it by the needs of a rapidly expanding population, yet the “progressive” methods used in chemical agribusiness to improve on nature are shortsighted and have led to severe degradation of the planet’s natural resources. Civilizations of antiquity used everything that was natural, in stark contrast to today’s world, where we create toxic plastics, sheet rock, and synthetic clothing materials, thereby creating disharmony in our environment and ourselves.

The great civilizations of antiquity were agricultural and pastoral. Most of these civilizations used a wide variety of foods for daily consumption when environmental conditions were stable. The most common dietary links to all of the great civilizations of the world were found in their basic choices of foods and how they were used. All grew an abundance of plant foods and raised various animals with which to prepare their meals.

Archeological evidence suggests that ancient peoples knew the importance of biodiversity and the nutritional balance of proteins, fats, and carbohydrates. Ethnobotanist Edgar Anderson explained that there are ample instances from South and Central America that show that both ancient and modern agricultural peoples had individual gardens, which included vegetables, herbs, a bee yard, a fruit orchard, a dump heap, a compost heap, and a few domestic animals for food. Plants were isolated from each other by intervening vegetation so that pests and diseases could not spread from plant to plant and everything was conserved. Even mature plants were buried between the rows when their usefulness was over.21

Evidence for worldwide trade of foods and other goods among ancient civilizations is also increasing. These trade routes increased the varieties of foods and affected more than only agricultural peoples: many newly introduced foods adapted to semi-wild states in forests and jungles and are now regularly consumed by modern-day hunter-gatherers. One striking example may be the many varieties of medicinal plants, fruits and other foods found in the Amazon jungle. Current evidence strongly suggests that this vast jungle was once engineered and occupied by agriculturists. One wonders how many of the useful plants and animals currently found there were introduced thousands of years ago by early agriculturists.

The Mayan peasants in the Chiapas region of Mexico are often considered “unproductive” by large agricultural companies because they produce only around two tons of corn per acre, but the other foods produced through natural farming methods on that same acre can amount to as much as 20 tons. It has also been calculated that their farm incomes would be reduced by a factor of three if they didn’t use biodiverse methods of farming. In Thailand, a home garden can contain up to 230 species of plants. African home gardens often include 50 species of trees with edible leaves. And while Nigerian home gardens comprise only two percent of total Nigerian farmland, not long ago these individual gardens produced almost half of Nigeria’s agricultural output, using the same natural methods as the Thai and other traditional cultures.

Many ancient agriculturalists not only concerned themselves with ecology, they also developed brilliant uses of what today would be considered useless land for farming. China, Peru, and Mexico made use of steep, rocky hillsides through a method of farming called terraced agriculture. Mountain streams were diverted to irrigate layer upon layer of terraces, sometimes extending thousands of feet above sea level. These terraces produced (and still do today) large quantities and varieties of grains, beans, and vegetables. The Aztecs of Mexico created floating gardens in the swampy areas of Lake Texcoco by piling rich earth from the lake bottom onto rafts made of weeds. These raft gardens would eventually be anchored to the lake bottom by the roots of the plants and trees planted on them. Large quantities of food were produced on these island gardens, all without chemicals or harm to the environment.

While the ecological crises of today are caused primarily by the environmentally devastating use of monoculture and other unsound farming methods of industrial agribusiness, many of the ecological problems of the past were caused largely by environmental factors. Granted, there is ample evidence for agricultural devastation in ancient history. One example is the slash-and-burn method practiced by some agriculturists and hunter-gatherers. Environmental destruction caused by the human need for sustenance was not uncommon among both groups, in varying degrees. However, these methods were not the only ones used by either group in the past or the present. Using our modern, environmentally destructive agricultural methods as a basis for comparison with all ancient hunter-gatherers or agriculturists is extremely inaccurate and does not take into account the sustainable methods of growing food practiced by many of our ancient agricultural ancestors, who were able to nourish large urban populations of hundreds of thousands of people with natural, whole foods.

The fact that the ancients cooperated successfully with nature while possessing advanced technology is cause for deep reflection. Organizing life around the science of nature in large urban civilizations, an accomplishment unknown to 21st-century Homo sapiens, does not mean conflict will never arise from external influences or even internal strife. But consider how today, with less effective methods, we live with the problems of hazardous waste, poor food distribution for current population needs, and the threat of global warming.

Many traditional peoples throughout the world still practice the old ways of agriculture and their land has long been producing and thriving. Unlike the denaturing processes that we use on our food, ancient food technology included natural processing methods of pressing, grinding, fermenting, salting, smoking, and other storage methods still used today in many parts of the world by traditional peoples. The ancients chose, grew, and harvested their foods according to nature’s cycles. They adapted to tastes through natural preparation methods. They wisely planned their waste management, drainage canals, and food production—right down to what ended up on the table. Food was a very important part of their daily lives. It played an important role in all scientific and religious beliefs and was treated with reverence and respect. For our own health and that of future generations, it is imperative that we integrate similar methods of cultivation on a global scale.

Who Do We Think We Are?

Historians often use the term “modern humans” to describe our ancient Homo sapiens ancestors who appeared on the evolutionary scene around 200,000 years ago. This is a generally accepted timeframe for when humans resembling those of today began their long, gradual path toward civilization. However, it is often suggested that primitive hominids were well on their evolutionary path to becoming modern humans as long ago as 500,000 years. The theory that nonhuman hominids evolved through mutations into Homo sapiens is based on scanty fossil records and is steeped in controversy; what is indisputable, however, is that nonhuman hominids did coexist with the earliest humans and their alleged relatives, Homo erectus, Neanderthal, and Cro-Magnon.

As mentioned in the introduction, two cultural groups of modern humans are recognized as having existed within the last 200,000 years. The first group, who, according to most anthropologists, are the first and only cultural examples of Paleolithic humans, are the hunter-gatherers. The second group represents agricultural and pastoral peoples who supplement their cultivated foods with hunted and gathered foods from the wild. These two groups of people represent the only non-ape, non-monkey specimens of hominids alive today, with the exception of the elusive giant hominids, about which we know little, who live in the deep forests of the Pacific Northwest, inaccessible mountain regions, and a few other remote parts of the world.

Hunter-gatherers and agriculturists, while anatomically alike, differ in their cultures and dietary traditions. Agriculturists are thought to have evolved from hunter-gatherers about 10,000 years ago, when the domestication of plants and animals is thought to have first begun. Recent discoveries suggest that this “guesstimated” evolutionary timeline is way off the mark and that agricultural peoples have existed alongside hunter-gatherers for a much longer period than the 10,000 years allotted. A print of a shoe sole has been found in Triassic rock in Nevada dating from 213 to 248 million years ago. Another example reveals human footprints—not those of an ape or missing link—preserved alongside those of a dinosaur.22 These examples have yet to be challenged effectively by any paleo-scientist or anthropologist.

What are we to make of this unusual evidence? Is it too much of a stretch to suggest that these particular anomalies were isolated occurrences made by a time traveler from the future? Could these tracks be from highly evolved humans who coexisted with cave men and dinosaurs? What if the methods used to date these tracks are highly inaccurate, and the tracks are actually from a more recent period, say, 10,000 to 15,000 years ago? If so, then dinosaurs may not be as old as we thought they were. The possibilities for explanation are endless, yet there are too many examples like these to ignore them, and addressing these anomalies by denial or by discrediting the individuals who bring them to our attention does little to further our understanding. For numerous examples of evidence pertaining to traces of human existence in prehistory, I strongly suggest reading Forbidden Archeology by Cremo and Thompson.23

Did a few small bands of hunter-gatherers from different parts of the world evolve beyond all other hominids and convert to farming a mere 10,000 years ago? A hypothesis held by the orthodox view is that agriculture was established by small bands of hunter-gatherers who had depleted their regional food supplies. These people then introduced it to others, and it spread. Meanwhile, in about seven other parts of the world, similar situations occurred.

Many historians hold to the “small bands of hunter-gatherers” part of this theory but also believe there would have been ample supplies of game and other resources in the regions where agriculture began when it did, and therefore suggest that there would not necessarily have been a need for agriculture. Why, with all those resources available, would they have then turned to farming?

A theory based on evidence from Scandinavia suggests that there was colonization by other people. It is suggested that these colonists were other bands of hunter-gatherers who might have been advanced in their own proto-agricultural experience and experiments. These groups could have practiced some seed planting or basic harvesting techniques. Another assumption is that the spread of agriculture occurred with an increase in the populations of the original agricultural groups. Once agriculture was established, we learn, it led to increased fertility, which in turn created an increase in population and greater dependency on agriculture.

Were the majority of Paleolithic peoples so well supplied with edible flora and fauna that they didn’t need to convert to agriculture 10,000 years ago? Or is it possible that the road to agriculture was not a slow, gradual evolutionary process instigated by a few groups of imaginative hunter-gatherers after all? The genesis of agriculture is still highly disputed by many historians.

Education as a Factor of Civilization

As an animal, man is the most plastic, the most adaptable, and the most educable of all living creatures. Indeed, the single trait that alone is sufficient to distinguish man from all other creatures is the quality of educability—it is the species character of Homo sapiens, according to British-American anthropologist Ashley Montagu. The differences between human and other primate intelligence are that humans possess discernment, vision, and determination when faced with life’s challenges. These qualities enable us to creatively work out problems by using our brains beyond the basic level of primate instinct. Using fire and creating clothing are two basic examples of human ingenuity that apes and chimpanzees have not achieved.

Based on current research, the earliest humans and Neanderthals shared these fully developed brain qualities, as did Cro-Magnon, though until recently all were considered illiterate primitives with much still to learn. For some unknown reason, Neanderthal and Cro-Magnon did not survive long enough to utilize their brain capacity to the extent of modern humans.

We are told that Homo sapiens evolved “because they had reasons to evolve.” These reasons are generally presented as a series of accidents, happenstance, mutations, and various other ways of describing how we developed from primitive to civilized people. Were language, writing, agriculture, metallurgy, and all the other earmarks of civilization really accidental discoveries or sudden brainstorms by evolving and insightful cave men and women?

For that matter, if the world was populated with primitive cave people with fully developed brains, why is it that only a small percentage of them evolved to the point of agricultural awareness in only a few places in the world? Why didn’t the rest of the world’s populations evolve as well? Many groups of hunter-gatherers never evolved at all beyond their original states, and this was not due to being isolated from those who had so evolved. One would think that after a good 500,000 years (or at least 200,000 years) of hunting, gathering and scavenging, all modern humans would have finally evolved from simple stone tool usage to a more settled agricultural lifestyle, especially if we are to believe that nutrition played a role in this development and that agricultural awareness and civilization in general was a process of evolution.

Research has shown that both apes and chimpanzees can learn how to paint pictures and communicate with sign language; these are two things they wouldn’t do in their natural habitat, yet clearly they do have the necessary intelligence to learn these things. If prehistoric hunter-gatherers had all the necessary intelligence to evolve to the point of modern civilization, why did it take almost 500,000 years for them to do so?

There are several theories that suggest answers to this question. One is that there was no reason to evolve from a leisurely life of hunting and gathering with abundant available resources. Another is that early humans evolved in stages and developed only what was necessary for daily living. For example, stone tools were essential for hunting and butchering animals; eventually pottery was needed to hold and transport water, so someone somehow came up with the idea of pottery and the use of ceramics…and so forth, right up through the smelting of copper and iron.

Let’s go back to the apes and chimps for a moment. If they had the intellectual capacity to learn how to paint pictures and communicate with sign language for millions of years, why didn’t they eventually evolve to do it naturally? The answer “no need to” would certainly make sense, because they have no need to do so now, either. Nevertheless, let’s see where this reasoning leads. Given that humans have had the intellectual capacity to evolve at a rapid rate, create civilizations, and practice agriculture for half a million years yet didn’t, can we use the same reasoning to say that we had no need to create civilizations while we still existed in small, communal bands of hunter-gatherers? And if this is true, then why did a few select groups of people throughout the world “evolve” to create agriculture and civilizations, while most did not evolve at all, relatively speaking?

Nutritional differences, hunting practices, reproduction and population pressures, settlements, the use of stone tools, environmental changes, social pressures… some historians say these are responsible for the advancement of civilization. These factors are doubtless critical aspects that are associated with and contribute to civilization—but what actually motivated certain humans to leap forward while others remained as they were? We have already discussed the ample wild food resources in areas where agriculture began, so there does not seem to have been a pressing need to domesticate crops and animals or to develop civilizations. In the development of civilization, however, education is intertwined within the social fabric of human relationships.

Illiteracy is a common problem in our modern world. We are faced with the unfortunate situation where some children do not have educational resources. Other children and adults are often faced with educational challenges for different reasons. And while proper nutrition plays a major role in brain and nervous system health, many uneducated children and adults simply lack the guidance from qualified teachers to lift them from their ignorance to a point where they can read and write. It is likely that most humans, were they raised without teachers and education, would remain illiterate their whole lives. There are many unfortunate examples of people living this way throughout the world.

In other words, humans from an early age need teachers, experienced educators who often become role models, leaders, and guides in order for civilization to progress. Some people are easy to teach, while others represent more of a challenge and require extra time and patience. Our traditional sources (oral, written, and legendary) of history and prehistory suggest that many of our primitive ancestors were taught the basics of agriculture and civilization by experienced teachers. Perhaps this was an experiment, similar to modern educators teaching chimpanzees how to paint and communicate today.

Even though many prehistoric hunter-gatherers did not advance to the point of civilization, it doesn’t mean they didn’t continue to learn from each other and whomever and whatever they encountered in life. For them, much of their education and learning experience came from the absorption and assimilation of the natural world through the direction of a shaman, medicine man, or other spiritual teacher. This form of leadership is common among modern bands of hunter-gatherers and may have been part of many Paleolithic groups as well. However, sophisticated agricultural practices and other earmarks of civilization are not a part of their education. This would tend to validate the evidence of our traditional sources, which suggest that civilization and agriculture were handed down from qualified teachers to selected peoples.

On the other hand, paleoanthropologists and archeologists are persistent in hanging onto the human and cultural evolutionary model when explaining the beginnings of pre-agricultural peoples. Orthodox scientists disregard the oral traditions of many modern hunter-gatherers that claim origins from ancient, agriculturally based civilizations destroyed by cataclysmic events.

Similar stories of this type of ancient heritage can be found around the world today among all types of traditional peoples. Gradually, more kernels of truth are emerging from these legendary stories as new discoveries are made, corroborating the idea that the great civilizations of the past appeared fully formed and show few signs of an evolutionary process from the outset.

We know that many of our ancient ancestors left either oral or written traditions that describe their lives. We don’t know how ancient civilizations originated, and so we blindly accept the conventional theories as answers. The decomposition of the natural materials used by our agricultural ancestors may be part of the reason we find so few traces of their legacy before 10,000 years ago. Furthermore, if conventional scientific paradigms fail to acknowledge the evidence for advanced civilizations in prehistory, how can scientists claim to know what occurred millions of years ago in Precambrian and Cambrian times? The truth is that we really do not know with any certainty, as available evidence can only take us back a few thousand years.

Unlike hunter-gatherers, who are mainly content with a lifestyle similar to that of their ancestors, agricultural peoples have changed their lifestyle and diet considerably within the last few hundred years. Most urban and rural humans are now so far removed from the precepts of their ancestral heritage that the stage appears set for an ultimate showdown between man and nature. Greed, arrogance, and strife have become the distinguishing characteristics of 21st-century Homo sapiens. Refined foods with their artificial additives contribute to unprecedented health problems, and forced growing methods have adversely affected agricultural conditions and ecological balance throughout the planet.

Meanwhile, we as a species have become addicted to constant entertainment and other distractions that suppress creativity, spiritual awareness, and the ability to think for ourselves. While we may have prevailed through our intelligence in developing certain technologies, most of us have become oblivious to rational thought. How do we reverse this destructive “cultural” trend? Could we adopt a hunter-gatherer diet and lifestyle and live a life devoid of technology and distraction? No, we have come too far for that, and it wouldn’t work anyway, as this lifestyle could not support our current population. We would very quickly decimate our food supply. Not only that, the hunter-gatherer lifestyle is not conducive to progress and intellectual advancement, as has been shown by more than 200,000 years of continuous repetition around a stone tool technology.

While agricultural technology has not been properly focused, this is not to say that science and technology haven’t helped us better understand the elemental nutrients in food. Rather, it is that every attempt to improve real, natural food has been unsuccessful. Today’s nutritional science lacks traditional wisdom and fails to acknowledge the importance of food quality; its agenda is geared to support large food corporations that have little interest in health. With profit-only goals, these corporations buy science in order to support the kind of technology that creates artificial and highly processed foods, emptied of the ingredients essential to sustain our species. Homogenization, pasteurization, growth hormones, steroids, synthetic vitamins, preservatives, and genetic modifications are some of the facets of modern technology that have failed to improve the food our agricultural ancestors brought to the table.

In every instance where modern, Western culture-bearers have introduced their food, which indigenous peoples know as “white man’s food,” the people’s health has declined. By contrast, when our ancient ancestor culture bearers introduced new foods and farming methods to other cultures throughout the world, the recipients thrived. To this day, the peoples who use traditional foods and natural agricultural practices do well. It stands to reason that these simple, basic ancestral foods of traditional agriculturists must be the proper nutrition for modern civilized humans as well.

One would think that with all our scientific knowledge, we would have improved on the more than 10,000 years of traditional dietary practices—but we haven’t. Perhaps this is because traditional foods cannot be improved.

Ancient China and India are two examples of traditional cultures with a very sophisticated understanding and long history of food as nourishment and food as medicine. In many ways, their holistic food science is far superior to our modern, left-brain methods of nutrition analysis. Modern nutritional science has done little to solve the problems of malnutrition and obesity, either in the “developed” nations or “third world” populations. What it has done is to contribute greatly to the hundreds of extreme and absurd dietary fads so prevalent today, which in turn have done little more than create greater confusion and ill health.

One can imagine future residents of planet Earth, 20,000 or more years from now, uncovering traces of artificial buildings and other examples of today’s civilizations—perhaps an intact package of preservative-laden pastry. What will they think? Will they recognize a connection between this evidence and the demise of civilization?

The Interconnection of Science and Religion

All was not peace and love with the ancients. Indeed, human history as we know it was not exempt from periods of violence and warfare. However, some sort of paradisiacal “golden age” is mentioned in nearly every ancient culture. Architecture of past civilizations reflected nature’s designs and used materials from the surrounding landscapes. Ancient peoples were aware that the frequencies of energy in nature flowed in constant, recurring cycles. Ancient civilization itself synchronized with the natural world to the extent that religion was their science—and science their religion.

Our ancestors equated nature with the divine in all they did and had a deep understanding of the interconnectedness of physical and spiritual worlds. This unique unification between science and religion extended throughout the ancient world and distinguished the great civilizations of the past from those of the present. While ancient beliefs are often confused and misrepresented by orthodox views as “ritualistic cults,” ancient peoples had a working concept of the soul and its purpose in the universe. Some of their ideas, though altered over time to fit the changing religious and political climate, were filtered down to us as days of fasting and other “holy days.”

What conclusions have we reached through research pertaining to our hunter-gatherer and agricultural ancestors and their traditions? Try as they may, researchers cannot define either group as having consistent qualities within their respective cultures. Let’s summarize what we do know about some of their cultural qualities and see what new ideas we have gained.

1. Ancestral Lineage. Long-accepted beliefs about prehistoric humans are changing as both groups turn out to be more advanced than previously depicted. New discoveries challenge the dating for agricultural origins and show there is little evidence to support the theory of cultural evolution from hunter-gatherers to agriculturists.

2. Art, Religion, and Ritual. The myths and legends of both groups of peoples reveal the great depth and broad scope of understanding of nature that extends well beyond the basic survival concepts of primitive culture. Rituals, once interpreted as primitive rites of passage, idol worship, and religious superstition, are now coming to be regarded as exercises based on profound wisdom. While peaceful, spiritual lifestyles may have predominated at some time, regular instances of cannibalism, incest, and other practices unacceptable to modern civilizations occurred among both hunter-gatherers and agriculturists. Links between different ancient cultures and tribes are being discovered that reveal cross-cultural communication over vast distances. These cross-cultural links are supported by archeological evidence and further confirmed by universal legends of culture bearers and global catastrophes. New discoveries are shedding light on these issues, again causing us to reevaluate past interpretations about cultural evolution.

3. Diet and Health. Neither hunter-gatherers nor agriculturists can be pigeonholed into a specific dietary category. Diets vary among both modern and ancient groups, with a wide range of foods depending on environment and lifestyle. Both groups have experienced examples of reverting to the other as a result of environmental changes: drought, famine, floods, and depletion of resources have resulted in agriculturists reverting to hunter-gatherer lifestyles and vice versa. Both groups include examples where their diets are nutritionally sound and healthy, and other examples where their diets have been inadequate, leading to deficiencies and health problems. Many of the positive health aspects both groups have experienced have been largely due to their close interaction with and exposure to nature. This type of lifestyle, along with high quality foods, enhances endurance, immunity, and overall strength.

Today we have disrespectfully distanced ourselves from the natural world, to our own detriment. We are well aware of the repercussions of living against nature, which include rampant disease, environmental devastation, and a pervasive sense of psychological, emotional and spiritual alienation that leads to a host of societal ills, including profoundly criminal and self-destructive behaviors. Perhaps this is why a few individuals and enclaves of people have chosen lives of celibacy and meditation in isolated environments, where they can reconnect with the knowledge of the ancients, the source of wisdom.

To know that our ancient ancestors were so highly evolved, both scientifically and spiritually, is an inspiration for many of us—and hopefully one that will motivate us to seek our species’ continuance in a more sublime expression than our present path would suggest.

Here we are, against evolutionary odds, living at a fraction of the cultural potential expressed by the wisdom of civilizations past. It would seem that we are treading a delicate balance. If we begin to fill in some of the missing pieces of our heritage with what we know to be true, we can undoubtedly rebuild our planet’s ecosystem, ensure our longevity, and fulfill our cultural destiny.

Choosing a Healthy Diet

September 13, 2004

What is Healthy Food?

Just what is “healthy food”? Ideally speaking, healthy food can be defined as natural foods: plant foods that are grown either organically or biodynamically, and animals that are pastured and raised naturally. Natural foods that have been processed and prepared by traditional methods, including fermenting, marinating, drying, etc., also are healthy foods.

Healthy food does not include foods that have been grown, raised or processed through genetic experimentation, chemicals, preservatives, hormones and other unhealthy measures adopted by corporate agribusiness conglomerates. These methods of raising food, while able to sustain the human species to some degree, do not even warrant argument against natural methods of food production, because history, science, tradition, and common sense have already set the record straight on this issue.

Every now and then, we find ourselves wondering about the wisdom and healthfulness of modern food production and its marketing spin, but most of the time the “science” goes unquestioned by an unwitting and obedient public. But we are right to wonder. Profit-driven production methods and the pseudo-science that backs their dubious achievements with spurious claims are commonplace in today’s market—and the more we educate ourselves about these practices, the better off both we and our future generations will be.

The healthiest foods for all human beings are those that are raised and grown through natural methods and time-honored traditions.

One might argue that natural foods too are exposed to environmental toxins on a regular basis, or that organic guidelines can sometimes be non-specific, or that the water used in the growing process of naturally raised foods may contain toxins. In many places, this is true; unfortunately, that is the state of the planet at this point. But adding additional chemicals, preservatives, growth hormones and the rest just compounds the problem. Naturally grown foods in a semi-unnatural environment are still superior to those grown in that same environment with all the added toxic ingredients. Therefore, for the best food capable of supporting health, consume organic and biodynamic foods as much as is realistically possible for you.

Traditional Differences: The Real and The Make-Believe

One of the best criteria for determining whether or not a food is natural is to use an historical perspective; where or how a food shows up historically or in a traditional context helps to clarify the issue. The further back in history a food goes, the better we can judge how effective and useful it has been, especially if the food is still in use today after thousands of years.

Even in this context, we will often find some natural processing methods involved with certain foods. Grains, for example, have been processed by soaking, fermentation and grinding to make a wide variety of porridges, breads and noodles for thousands of years throughout the world. This form of processing does not involve the use of chemical preservatives, however, and is perfectly natural and healthy. Another example of natural processing occurs with fermented vegetables. Pickling is a traditional and natural method of food processing that offers numerous health benefits.

Today, however, these and other traditional methods of food preparation are artificially mimicked in the commercial food industry, where inferior products manufactured with chemicals and preservatives are the norm.

Substituting the Substitutes

Well-known plant- and animal-derived commercial food products crowd the aisles and overstuff the shopping carts of today’s consumers. Some of the most widely consumed groups of processed foods are cold cuts, hot dogs and processed cheeses. While often containing the by-products of actual foods, these products essentially are highly processed food “substitutes.”

The same also goes for the substitute foods some people call “modern natural foods.” These “natural” versions of substitute foods are those products found in natural food stores that are designed to look and taste like substitute food products found in commercial grocery stores. Items such as soy “tofu” hot dogs, fake bacon, soy sausage and soy cheeses are food products, and not actually foods per se: highly-processed products made from by-products of soybeans, mostly soy isolates, that offer little to no health benefits. Long used as filler for pet foods, these fake food ingredients, usually processed with chemical solvents and other unhealthy processing methods, are now used to create “natural” food substitutes that resemble the commercial food substitutes that have become so familiar to most of us who were raised in Western cultures.

One has to question why someone changing to a healthy diet would want to reproduce the non-food items of his past with more “natural” versions of non-food items to begin with. The original processed animal products do not offer the natural nutrition of real, naturally raised animal foods, and the soy versions offer even fewer benefits and are not healthy substitutes for real food, no matter how we look at it.

In the case of the various new-fangled “meat” products, nutritional science derived from traditionally consumed soy products (miso, tamari, etc.) is often used to support highly processed soy foods. This practice of nutritional indoctrination is deceptive and misleading, yet unfortunately it permeates both the commercial and natural food industries. The same science is applied to substitute meat products. Processed beef parts with nitrates and preservatives in the form of, say, bologna are no substitute for air-dried, grass-fed beef, and the two cannot be qualitatively compared just because the bologna has animal parts in it. Healthy, real food is just that; it cannot be substituted with imitations. If you’re going to eat these kinds of foods and quality is important to you, it is far better and healthier to eat real sausages, real hot dogs and other similar real meat foods made from real, grass-fed, naturally-raised animals, free from chemicals and preservatives. So why would some of us want to include these and other food substitutes in our diets in place of real foods?

Who Is Choosing Your Food For You?

Familiarity, convenience and indoctrination are three important reasons for choosing certain foods. Many of us have been raised on processed foods: cold cuts, hot dogs and other processed meat protein sources. These convenience products were and still are the mainstay of school lunches and weekend cookouts for many people. These modern foods offer convenience and are easily preserved through chemicals and refrigeration; they don’t require cooking and, most of all, they are designed and promoted by large companies whose sole purpose is profit, with little concern for public health—as evidenced in the quality of the majority of these products. That aside, many of us were and are raised on these processed foods, and they have become strongly familiar to us, both to our senses and to our biochemistry.

In many households, these processed animal products have replaced the traditional whole roast chicken, roast beef, lamb, duck, fresh fish or other traditional healthy animal foods; in other households, these traditional animal foods are simply given less priority and consumed less frequently. In the last few years, many health-conscious people have discovered the unhealthy qualities inherent in processed meat products and have stopped eating them. At the same time, though, many have also stopped eating traditional animal products, opting instead for the alternative diets and lifestyles offered by health food or natural food diets.

Knowing an advantageous opportunity when it saw one, the soy industry recognized the demand for plant-based protein sources among the rapidly growing number of natural diet followers, as well as the financial growth of the natural-products industry in general. The growing variety of soy-based protein pseudo-foods that resulted, along with a massive marketing effort including scientific backup, quickly permeated the commercial and natural foods marketplace. The effort to influence the masses with these new soy-based meat substitutes is an ongoing campaign, but it is now joined by others: new meat substitutes made from fungi and other ingredients are being used to create still more artificial substitute foods, and these are all competing in the race to flood the marketplace with new products for public consumption…in other words, to take the place of real food.

So far, the greatest impact of these pseudo-foods has been in the diets of health food enthusiasts—ironically, the very people whose original philosophies and ideals, along with their founding fathers’ and mothers’ commitment to quality, were centered on the importance of quality in growing, processing and manufacturing of whole foods!

While the marketing of these new products may have misled many vegans, vegetarians and other health food enthusiasts to believe they had a greater variety of healthy protein sources with which to help balance their diets, the products also struck a familiar chord from the past. Followers of natural diets no longer had to pine for those processed meat products of their pasts: now they had substitutes! The new ersatz versions tasted just like the foods they’d left behind, plus they were low in fat and cholesterol and, let’s face it, they were convenient, too. So convenient, in fact, that for many naturalists, these new substitute foods have come to comprise as much as 50 percent or 75 percent of their diets. Sometimes even more.

Unfortunately, this is not a good thing, and I say this with great respect for those who are attempting a true vegan or vegetarian diet based on whole foods, as well as from my own experience as a teacher and counselor of whole-foods nutrition. Some of the well-documented health problems resulting from this imitation-food phenomenon include: hypothyroidism and other hormonal problems, loss of hair, pallor of skin, loss of muscle tone, fatigue, digestive distress and a host of other problems. According to some researchers, some of these problems may be irreversible, yet advocates of the soy protein myth and other fabricated “health” products are quick to quote numerous scientific articles touting the benefits of these imitation foods.

Who is behind these scientific articles? And what is their real agenda? One can also find numerous scientific reports touting the benefits of vegetable oils on cholesterol, heart disease and other health problems—but scientific evidence can also be found that shows these same polyunsaturated vegetable oils contribute to these problems. To anyone willing to take a close look at the science of nutrition, it becomes clear that nutritional science, like so many types of “expertise” today, has a price. Not only that: just about every product on the market, be it natural or non, has two sets of conflicting nutritional data—one version to support it and another to debase it.

Convenience foods were designed to make life easier for modern civilization. We all can certainly use a little more convenience in our lives, but the commercial line of these pseudo-food products has resulted in an unbalanced tradeoff of diminished health and vitality for convenience. In addition, the natural versions of these convenience foods are no better than the original ones they were made to resemble.

This serves as a valuable lesson for all of us: nutritional science alone should never be used to make healthy dietary choices. Due to the influence of industry agendas, conflicting data is rampant in nutritional science. In fact, before even considering the “scientific evidence,” it makes sense to first use common sense and look to tradition. Only then can one be free from the quagmire of confusion created by nutritional science.

It is common for modern nutritional science to have no basis in nutritional traditions; it is far less common for nutritional traditions to have no basis in nutritional science. Therefore, one should exercise caution when believing scientific reports on nutrition that lack a basis in traditional nutrition. Learn to make your own choices for yourself and your loved ones using tradition, common sense and science.

The Whole Story

Two of the greatest barriers that often stand in the way of discovering our dietary needs can be found in deeply entrenched habits and beliefs about food and nutrition. Habits and beliefs about food, while different in meaning, are both linked with that invisible umbilicus that feeds and nourishes the ideas and concepts that created them in the first place: our appetite. Therefore, when choosing how and with what we will nourish ourselves, it is of utmost importance to constantly challenge our concepts and beliefs through other sources of information and traditions.

When using scientific information, we also need to seek both sides of the story. For example, current nutritional science states that tomatoes are one of the foods with the highest content of lycopene, a substance said to be highly beneficial to health. Because this information has come from what most people consider a reputable source of scientifically published papers, it is quickly assimilated as a new belief. For many, this means they can translate this newfound data into a habit or reason for eating lots of tomatoes: to do so means assuring that they will get all those healthy benefits lycopene has to offer.

However, the very same science responsible for the isolation of lycopene in tomatoes has also isolated other not-so-healthful, toxic alkaloid compounds that have been adversely linked to the very diseases lycopene may help to prevent! When both sides of the available nutritional science are reviewed, we come up with something like this: Tomatoes have a powerful substance that might help protect us from disease, but they also have other toxic substances that may contribute to disease. Thus, we have scientific data on a food that is both helpful and harmful—but only the helpful is emphasized for advertising purposes. We could go on with detailed scientific reports supporting both sides of the story, but the point of the matter is this: when making food choices, it is best not to form an opinion based on one source of information alone, regardless of who or what it is.

I use tomato as an example because it is a food that has become common and widespread in just the last hundred years or so. While its history as a global food has been only recent, tomatoes and people have adapted to each other; tomatoes are now one of the most popular and commonly consumed foods. For many people throughout the world, these unusual fruits of the nightshade family are consumed in some form or other and recognized as highly suitable to a nutritious diet—and now, with the science of lycopene backing it, even more so. But simply because we have discovered a food that seems to suit us or can be supported by scientific research does not mean it is something that should be eaten all the time, as has become a common habit for many people. The idea that, “if a food is good for you, then it is good to eat more of it,” is not a wise dictate to uphold. More does not equate to better, regardless of the food in question.

The tomato is not the only food that carries potentially harmful components; many other healthy plant foods do as well. Many plant foods contain toxic lignans that can serve as protective mechanisms for plants. This doesn’t mean one should consider tomatoes an unhealthy food, nor any other traditional food for that matter; it means that we should be well informed about a food and its historical uses before we formulate a belief based on a single component that could lead to a regrettable habit of excess based on half-truths promoted for the sole purpose of profit.

These same one-sided nutritional science agendas can be found with other foods as well; coffee, chocolate, flax oil and soy are just a few. The need to discover our personal dietary requirements is a matter that takes time, information, common sense and some experimentation.

Eat Globally

Experimentation with food is not something to fear, nor is it something one should do with careless abandon. The best way to begin experimenting is through traditional and ethnic cuisines. When approached in this manner, exploration of new foods can not only be an adventure, it can also be a most enjoyable and rewarding experience.

For example, ethnic cuisines often include spices and herbs—seasonings long known and proven to have health benefits among traditional peoples. Many people have had little experience with some of these properties and flavors. Even people within many ethnic groups have experienced only their own foods and seasonings. It is important for all those living in our modern century to experiment with global foods, to find those with time-honored traditions of health and healing and to support the continuation of these foods through their original seeds and sustainable agricultural practices.

Global foods offer the most extensive choices for establishing a healthy diet. Experiencing other people’s traditional foods can give us deep insight into ourselves and other people’s cultural patterns, philosophies, cosmologies and rituals, since food has always played a profoundly important role in the lifestyles of traditional cultures.

Truth or Dare

Not only can we experience newfound health benefits and new tastes when incorporating global cuisines in our diets, we also experience new textures. Some of these may be familiar, yet the familiarity could be in relation to a current unhealthy food habit. Replacing this unhealthy food with a newly discovered healthy food or foods having a similar texture can help satisfy the desire for that texture and open new doors to better health. Yes, even food textures can define how we make food choices!

Are you one of the many people who have the deep inner urge to crunch? A potato chip, corn chip, cracker…anything you can put in your mouth that crunches when you bite into it? Even worse, do you need to crunch so everyone in your immediate environment can see and hear it? Are you even aware you are doing it when you do it?

Once, while attending a pre-celebration dinner after one of my lectures, a rather vivacious young woman came up and began talking to me while crunching on crackers and dip, crumbs falling out of her mouth as she spoke. I tactfully maintained my composure as she interrupted a conversation between me and a close friend of mine. My friend, known for brazenly speaking his mind, stared at this cruncher in disbelief for a moment, then asked her how in the f— she thought anyone could understand a word she was saying with her mouth full, and how disgusting she was. He then walked away. The young woman’s response was, “What did I do?!”

This is an example of how powerful an influence food textures can have on developing habits and food choices. While speaking with one’s mouth full is not proper etiquette, in this case it wasn’t the full mouth so much as the fact she had to continue crunching in front of us and everyone else while trying to converse.

Rarely are our food habits formed by sensible and conscious choices; more often than not, they are formed through an individual’s belief or through someone else’s belief. This can come in the form of popular diet books, a health practitioner’s charisma or expert opinion, the identity found through the camaraderie of a group of like-minded people or other very common sources that we have all been influenced by, to some degree or other. These other influences, positive or negative, often include parents and relatives, peers and friends.

Whoever or whatever the influences on your dietary choices are, it is important to question how realistic they are for you.

One of the most powerful influences on our food choices today can be found among diet groups. Let’s consider the example of group influence on our dietary choices.

Choosing The Natural Path

My experience with natural food diets has shown there are two primary reasons for choosing one of the many types of health food diets.

The first reason is public exposure of the intolerable and abusive treatment of factory-farmed animals, along with the chemical processing of these products and their resulting poor quality. One would be hard pressed to argue against the negative effects on one’s health of consuming too many processed and refined foods laden with preservatives. However, these facts can also be misconstrued to mean that any and all farmed animal products are detrimental to health.

Many vegetarian crusaders, intent on the elimination of the atrocious practices involved in raising factory-farmed animals, are either blinded by their cause or simply unaware of traditional methods used to raise animals for food and of the important role naturally-raised animal products have played in a healthy traditional diet for many thousands of years. This one-sided perspective has led to many extremist reactions against the human consumption of any and all animal products. While this point of view is prevalent in many natural diet philosophies, it is most evident in the plant-based raw foods groups (there are animal-based raw foods groups as well) and the vegan groups. Ironically, it is within the vegan groups that we find the most extensive use of imitation animal products made from highly processed soy, fungi and other ingredients.

The second common reason for choosing a natural diet and lifestyle is as a way to define one’s personal identity. Human beings have a long history of identifying strongly with a wide variety of lifestyles. Religions, professions and other cultural factors have long offered ways in which we can develop identities to which we can “belong” and with which we can label others and ourselves. Identifying with a particular diet group and even becoming a proud model for and example of what it represents is a common path to distinction in today’s world. For many, the need to be part of a group with high ideals is very important and many diet groups offer unique combinations of philosophies and lifestyles. The foods that make up a particular diet are often used to feed the identity of an individual; at the same time, those foods prohibited by the diet are often the ones that end up actually defining the individual.

Even if diet groups with leanings toward moral, spiritual or philosophical agendas are your thing, you still need to examine whether or not you are thriving while nourishing yourself with your chosen approach. It’s easy to find support in sub-culture diet groups such as raw foods, vegan, macrobiotics and others, but it is just as easy to lose this support if you question or challenge leadership or the basic dictates of the regime, or if you stray from the path. Intolerance of dissent, along with criticism and even persecution of dissenters, is common within such groups. Knowing this in advance can be helpful for those exploring new paths to health with diet groups.

Another vitally important thing to know about diet groups is that when health issues arise, whether concerning oneself or others, it is essential to search for consistencies within the group before accepting stock answers from experienced group leaders. Nutritional deficiencies are common in natural diet groups and can manifest as hypothyroidism, loss of libido, loss of menstruation, premature aging, premature hair loss, arthritis, extreme weight loss or weight gain, and more. It is not uncommon for nutritional deficiencies to be addressed as a “cleansing experience,” a “period of adjustment to the new diet” or as caused by “straying from the diet,” or even criticized as being symptomatic of ineptitude or a lack of understanding. While some of these assertions can be true to some extent, if several cases of similar deficiencies show up among other followers in the group, it is important to consider making improvements and changes in your diet.

In fact, it makes the most sense to look for these problems first, before diving headfirst into something that could cause more problems than what you began with. Look first at the children, as they tend to be the first to be adversely affected by nutritional problems. Be cautious, stay grounded and remember that while there can be many health benefits and much to learn from diet groups, your safest approach will be one where you seek out the rational and traditional aspects of the diet and avoid the extremes.

Reality Checks

Here are a few tips to help get you through the often-confusing experiences of natural diet hopping. Start by asking yourself the following questions.

Does your diet consist of a wide variety of wholesome natural foods?

If you are interested in health, and not just weight loss, you need to consume a diet consisting of mostly natural whole foods and not pre-packaged artificial products, which may in the short term help you lose weight but will do little to support your health. Contrary to what many people think, not all fad diets are “health food” diets. Quite the contrary, most modern fad diets are comprised of fake foods loaded with preservatives and designed for the sole purpose of addressing excess weight. Fad diets that are based on natural foods, on the other hand, should at least be made up of real foods, although this is not always the case, either.

When a diet is said to be comprised of natural whole foods, it is important to understand that this does not mean natural “designer foods” that contain a natural ingredient or two but cannot be found anywhere in a traditional setting before 100 years ago. So far, modern food technology has not been able to improve on traditional natural foods, other than in the capacity to store foods through refrigeration and packaging. The addition of synthetic vitamins and nutrients to factory-farmed foods grown on depleted soils is not an improvement, nor are faster growing times or fatter animals derived from growth hormones, unnatural feed and inhumane animal treatments. For the most part, while some natural diets can lack important sources of nutrition commonly found in most traditional diets, natural foods based diets are good places to begin one’s journey to better health.

How much of your diet is comprised of “substitute foods”?

Having already established the importance of a diet based on natural foods, we will approach the remaining questions and comments from the basis of a natural food diet. Substitute foods can be classified as foods that offer little quality nutrition but are eaten for fun, to satisfy hunger or to soothe emotional states: junk foods or binge foods that are not on your particular diet (and if they are, they may not necessarily be recognized as junk foods). For convenience, we can break down substitute foods into two categories.

1) Commercial substitute foods: packaged pastries, cakes and cookies, ice cream, candy, fast food, soda, chips, some deli foods, flour products (breads, bagels, rolls), processed dairy products, etc. Now, you may be thinking that this category does not apply to you, because you are on a “natural diet.” Well, maybe it does not apply to you—but most people following natural diets of all types frequently indulge in large quantities of these food products.

2) Natural substitute foods: soy products made from soy protein isolates, soy milk, non-dairy ice creams and cheese substitutes, margarine (all types), candy, coffee, nutrition bars (containing soy, refined oils and synthetic vitamins), packaged foods (cookies, chips, breakfast cereals, etc.) made with soy, safflower and canola oils. There are more, but these are the ones most commonly consumed in large quantities by natural food enthusiasts. These foods are found especially in natural food stores, yet they can also be found in specialized sections of most grocery stores.

Keep in mind that this is simply a reality check and not some harebrained idea about making you either perfect or guilty of dietary sabotage. We all enjoy the freedom to explore and indulge in things that are not always beneficial to our health. While occasional indulgence rarely poses a problem within a healthy diet, a large quantity of insalubrious foods, along with the common misunderstanding that some of these foods are healthy compared to traditional foods, can and often does contribute to problems.

Going back to the question, “How much of your diet is comprised of ‘substitute foods’?” we can get more specific: What percentage of your current diet is made up of these foods: 10 percent, 30 percent, 50 percent, 75 percent, 100 percent? What can be said about these percentages?

If you answered “10 percent,” congratulations! Your diet is probably quite balanced and satisfying on many levels. It is perfectly normal to consume things that do not fit your ideal diet; everyone does it.

If you answered “30 percent” or “50 percent,” either you are confused about what is truly healthy for you, or your diet is lacking important nutritional sources, and you may be unconsciously or consciously trying to compensate for a lack of higher quality nutritional sources. This isn’t going to work. Before long, you will begin to notice a decline in energy, and possibly other negative symptoms as well. Your diet needs some serious revision.

If you answered “75 percent” or “100 percent,” you are fooling yourself: you are not eating a healthy diet. Not even close. It is time to start over.

Is your diet fulfilling the promises associated with it?

While you might think your diet is special as compared to others, when it comes to “diet promises,” yours is exactly the same as all the other health diets. Healthy diets promise the following: youthful appearance, weight loss, increased energy and vitality, beautiful skin, optimal nutrition, minimal cravings, improved digestion and more. These are reasonable promises that a truly healthy and balanced diet (along with moderate exercise and a positive attitude) should be able to fulfill, no? I think we can agree on that.

So, what do you think? Has your diet fulfilled or is it fulfilling its promises?

If you have been on your diet for four to six months, many of these criteria should already be starting to manifest in a positive light, unless you are seriously lacking exercise or very negative about life in general. Granted, if you are suffering from chronic digestive disorders and have been for ten years or more, it may take a while before you are happy with your results. Even so, a healthy, balanced diet should result in some noticeable improvements within a relatively short time. The same goes for the other promises. If your diet is what it promises, you should be getting results.

Let’s consider one of the above promises: increased energy and vitality. This one is a major appeal for many people choosing a new diet. How does yours fare in that department? Has your diet given you more energy and vitality on a regular basis? Hmm, think about that one carefully before answering. Got your answer? Good…

Now: remove coffee, tea, chocolate and all other stimulants from your diet and answer the question again. Uh huh. I know I may not have gotten you with that one, but I guarantee that I got at least 75 percent of those reading this article. These stimulants are a good barometer with which to gauge the “increased energy and vitality” factor of your diet. The more you need of these stimulants, the less your diet is fulfilling its promise. Again, I want to emphasize that these foods used in moderation do not define dietary characteristics—but when they comprise a large percentage of your natural diet you are obviously dealing with false promises and will at some point have to face the reality check.

What keeps you inspired to stay on your diet?

Is it convenience? Have you figured out simple ways of incorporating the essentials into your lifestyle, or is your diet simply something you return to now and then for reassurance in some area of your life? Do you need your diet to help define who you are—a vegan, raw foodist, a macrobiotic, Atkins? If so (even if only partly so), what does this definition do for you? Is it because you have made many friends through your dietary choice, and these friends make up most of your world? Is it because some admired celebrity proclaimed his or her allegiance to the diet? Is it the philosophical leanings, e.g., animal rights, a peaceful world, non-violence, free love? How about weight control, energy and the other reasons mentioned above?

Whatever your reasons, it is important to consider whether they still apply to you at this stage of your life, and how those reasons weigh in as priorities compared to the essentials of a healthy diet.

Essential Perspectives For Those on a Healthy Diet

1. Foods that have been consumed by traditional peoples for thousands of years and have contributed to robust health are not impure, bad or “less spiritual” than other foods.

It is very common for some diet programs to outlaw certain traditional foods and preparation methods based on a particular agenda. Grains, beans, animal products, cooked foods, raw foods and fats are some of the foods and preparation methods that can be taboo in some diets. Avoid judging real foods with simplistic terms such as “good and bad.”

The question of what is healthy food is not a black or white issue. While you may choose to follow the dictates or ideals of a particular diet, always remember before making a judgment about any traditional food that there is the issue of quality to consider. People have thrived physically and spiritually for thousands of years on the very foods that may be taboo to your current diet.

Furthermore, there is no evidence that quality sources of these taboo foods, in and of themselves, when used in a balanced and healthy diet, have caused the problems your regime might accuse them of causing. Just because many people judge food as “good” or “bad” does not mean this judgment is correct.

For example: “Fat is bad.” Not true. Fat is essential for metabolic functions, among other things. Some fats are very efficient at assisting in these functions, whereas others can inhibit these natural functions and are generally harmful to the human body. You always have a choice as to what you eat and do not eat, but be careful not to get caught up in the bad food–good food “diet agenda” to the point where you think you are better than someone else because of your newfound “diet identity.”

2. Know your limitations and habits, understand balance, and remember, “more is not necessarily better.”

This applies to any food. Falling for the “more is better” idea leads to extremes and contributes to eating disorders and a host of other problems.

What is familiar from the past will tend to be brought into the present. Be aware of how your old habits influence your new diet. For example, someone may have had a habit of eating a bag of potato chips for lunch with a soda. On his new diet, he eats a bag of “natural” potato chips with soymilk or a “natural” soda. This is not an improvement over the old diet.

Another example would be an individual whose past eating habits, consisting of large amounts of cold deli foods and iced beverages, have contributed to chronic constipation and digestive distress. On his new diet, he brings the same familiar pattern to the “natural” deli and does the same thing as before with better quality foods. While the quality of the ingredients has changed, the eating pattern and habit has not, so the digestive distress continues unabated.

Other past influences one must be careful of bringing to a healthy diet include food fears, emotional eating (eating to compensate for emotional distress), fanaticism and obsessive-compulsive eating. Some “natural diets” can actually contribute to these problems—so be careful.

3. Question your beliefs.

If you are not thriving on your diet and do not feel well, question everything about the diet and your practice of it until you find out where the problem lies.

It is okay to question the diet and the philosophy. In fact, it is essential that you do. Don’t let yourself get to a place in your health where you will regret what you have done. Be prepared to make changes in your diet. Dietary changes are inevitable. Some of these changes may require that you explore and embrace ideas contrary to your present beliefs.

Remember the diet promises list.
Most of all, give yourself permission to enjoy a wide variety of wholesome foods and celebrate life!

Dangerous Brains

September 13, 2004

Diet Extremism in the 21st Century
Part 1: Reexamining the Case for a Low-Carbohydrate Diet

Proponents of a high-protein, low-carbohydrate diet use the following general outline of human evolution and reasoning to support their claims:

2.5 Million Years Ago: Primitive humans begin to use stone tools and hunt small animals.
CLAIM: Until this time, our earliest primate ancestors consumed a diet much like modern chimpanzees. As a direct result of the increased nutrient-dense protein foods—i.e., small animals—our ancestral hominids’ brains begin to increase in size.

1.8 Million to 500,000 Years Ago: Homo habilis and Homo erectus evolve into Homo sapiens.
CLAIM: These three species have larger, more fully functional brains than their hominid ancestors and have the capacity to figure out how to hunt big game.

200,000 to 100,000 Years Ago: Neanderthal appears and is accompanied by modern humans. They cohabit the planet and eventually Neanderthal dies off. Fire and cooking are discovered.
CLAIM: For the next 190,000 years, humans continue as hunter-gatherers.

10,000 Years Ago: Modern humans begin farming out of necessity.
CLAIM: Through the advent of agriculture, man begins to cultivate grain, thus increasing his carbohydrate consumption. This 10,000-year period is insufficient time for modern humans to physiologically adapt to a high carbohydrate diet from grains, so the health of Homo sapiens begins to decline.

CONCLUSION: The best diet for humans is that of our Paleolithic ancestors—the diet on which we have evolved. This is proven by the fact that our Paleolithic ancestors were healthier than our more recent agricultural ancestors.

This nice, neat conclusion, unfortunately, doesn’t stand up well to close inspection, and its supporting claims are riddled with contradictions and inconsistencies.

For example, throughout these long stretches of time, various human species have evolved and disappeared. The “established dates” of the chronology given above are spurious, as such efforts at fixing dates for evolutionary epochs vary widely according to what and whom you read and when the material was published.

Experts vary so in their opinions and conclusions because information is constantly being updated and revised through new discoveries. There are numerous inconsistencies in fossil records. The result of this shifting picture is that all the foregoing conclusions and inferences remain little but unproven theories. Indeed, upon close examination, they raise more questions than answers.

How are we not physiologically evolved to handle high quantities of carbohydrates if our earliest ancestors, during their ape and chimpanzee stages, consumed them almost exclusively in the form of fruits and other plants for who knows how long?

If we are going to subscribe to the theory of cultural evolution, we are accepting the fact that our ancestors had to adapt to eating larger quantities of meat and fat long after already having established a physiology designed to handle large quantities of carbohydrates as a primary source of food. There is no reason to believe that large portions of these carbohydrate foods did not stay with evolving hominids all the way through the evolutionary process and right up to the present, as is evidenced in many modern hunter-gatherers.

Is it likely or even possible that once our so-called “earliest ancestors” got the taste for meat and fat, mostly in the form of small animals, they then simply gave up all the carbohydrates they had been consuming for untold generations?

There would be only one reason for any hominid to give up large quantities of a given type of food after having eaten it for hundreds of thousands of years, instead resorting to the exhausting and dangerous task of hunting big game: if the environment stopped producing it.

Why would the environment stop producing an abundance of plant foods? This might occur during long periods of consistently freezing weather. Ice ages (another theory) undoubtedly had a major impact on diet and health and represented long interruptions in our ancestors’ patterns of living. Arctic-type climatic conditions would have resulted in low carbon dioxide levels, thus contributing to less and smaller plant growth.

This scenario certainly would have encouraged phases of higher animal food consumption, leading many early hominids to hunt full-time. Temperate climate phases, on the other hand, would have brought an increase in plant and other easily gathered foods, similar to those in modern day hunter-gatherers’ diets.

Why did it take so long to create stone tools and go for the big game?

From 2.5 million years ago to 1.7 million years ago, our ancestors were still scavenging and eating small animals. Ancestral hominids finally reached the point where they could use their larger brains to hunt big game. So says the theory. However, current findings have some paleoanthropologists now leaning toward scavenging as the means by which Paleolithic humans obtained the majority of their unpredictable animal foods.

It is well known that most plant matter is scarce in the archeological record because it decomposes without a trace, unlike bones and pottery. If a variety of carbohydrates were available from the earliest times whenever the climate was suitable, it then stands to reason that they would have been available and consumed in large quantities continuously throughout the evolutionary process. It is always risky to compare modern hunter-gatherers to those of Paleolithic, Mesolithic, and Neolithic times because many of the food sources from those earlier periods are no longer available, and those that are have gone through various changes over the eras.

Does the big game diet fare any better in our nutritional history of brain advancement?

Concerning the increased consumption of meat and fat from small, easily obtained animals and its correlation to the increase in our ancestors’ brain size, this is a reasonable assessment, because nutrient-dense foods do nourish the brain.

However, what about the time it took brain size to develop? Did humans just sit around for nearly a million years, drooling and fantasizing about the large prey so abundantly available in their environment while consuming small animals and fruits, until they finally figured out how to put their large brain to some use so they could hunt the big game? Well then, so much for the attribution of brain development to nutrient-dense protein foods in the form of small animals. These foods may have contributed to brain size, but they obviously didn’t help much to promote brain function.

It is true that most varieties of game contain omega-3 fatty acids, zinc, L-carnitine, B12, and other vital nutrients utilized by the human body for brain function, strength, and immunity. Remember, we are assuming that our ancestors’ larger and more developed brains gave them the capacity to hunt, and that the consumption of larger quantities of nutrient-dense animal foods likely played a role in further brain development.

QUESTIONING HUMAN EVOLUTION

The second stage of advancement occurred around 1.7 million to 500,000 years ago when our hominid ancestors gathered their stone tools and took the risk of hunting big game.

Now, one must question whether they did this to supplement their existing diet of plants and small animals, or whether they gave all that up for the big stuff. It is still too early to tell how these slowly evolving big-game hunters are doing with their fully evolved brains as compared to their small-game-hunting ancestors, who took what seems to have been forever to get it together to hunt big game.

With the theory of slow, gradual ascension from apes to modern humans entrenched in our textbooks, it is no wonder that the image of the lumbering Paleolithic brute is so well established in people’s minds, even though most paleoanthropologists today are doing their best to change this conditioning. This is likely to remain a difficult challenge as long as the theory responsible for this image has not changed.

At this point, we have a hypothetical historical situation in which our earliest ancestors consume mostly fruits, insects, and occasionally small animals as a regular diet, much as apes and chimps do today. Then, according to theory, 2.2 to 2.5 million years ago a special line of hominids evolves. These new hominids have to learn to spot predators hiding in the tall grasses of the savanna in Africa. Since they no longer live in trees, they begin standing upright in order to see over the tall savanna grasses, and are the first species to become truly bipedal.

But hang on. Have you ever actually seen the tall grasses of the savanna? Even standing on someone’s shoulders and looking out over the tall grasses of the savanna, one would be hard pressed to spot a low-crouching, camouflaged lion. Anyone giving this idea even a little commonsense thought can see the absurdity of it.

Moreover, why didn’t any of the other monkeys, apes, and chimps evolve beyond their original status? Were they so frightened by crouching predators (and are they still today) they were able to ignore evolution and refuse to venture from the safety of the trees (and still do so today)?

The theory of human evolution is described in part as a primitive species going through processes of gradual change, through which this species eventually evolves into a more efficient and thus more advanced species. We start with nonhuman hominid ancestors whose diet consists of fruits, other plant parts, insects, and a greater quantity of small, easily hunted animals, which may eventually contribute to an increased brain size. Loss of body hair for ease in perspiring and other morphological changes also take place at some point.

These hominids evolve further and develop skills to hunt large game, which results in increased meat consumption. Isn’t this heavy game consumption really an example of adaptation to a food that was foreign, at least in large quantities, and apart from small animals? At this point, our hairless hominids are really eating the same diet they have been eating for millions of years, with the addition of greater quantities of large game.

What happens next?

MEAT-EATING NEWCOMERS

Neanderthal and anatomically modern humans eventually appear out of the same evolutionary branch as Homo habilis and erectus. Both groups appear about 200,000 years ago (some say 100,000). As noted in the outline above, fire is discovered for cooking at about this time; indeed, the advent of cooked food may have been an additional influence on larger brain development.

Modern man and Neanderthal continue the patterns of big game hunting for around 100,000 more years. Neanderthals are robust and mighty hunters with fully developed brains; yet, whether by some trick of fate or simple ingenuity, it is we smaller-brained modern humans who survive and the Neanderthals who die out.

Now, I enjoy a nice barbecue with my friends and family on occasion. Still, one cannot deny that over the past several decades, we have been bombarded with scientific studies showing the dangerous, cancer-causing effects of grilled and charred foods, especially meats. I cannot help wondering how researchers can claim that our Paleolithic ancestors were so much healthier than later farmers. Isn’t this pretty much exactly how our big-game-hunting Paleo ancestors were cooking on a regular basis? After all, they had not yet learned to manufacture cooking pots: their food preparation had to be similar to modern grilling, only more intensely so due to their food being in direct contact with flame.

Direct flame is well known to cause high levels of free radicals in the blood of subjects consuming charred food. Most of the evidence found relating to food in prehistoric kitchen middens and other Paleo sites has been from charred remains. If this charred food represents the largest part of our ancestors’ diet, then antioxidant foods, such as fresh vegetables, could not have helped much to counter the effects of the toxic free radicals.

Either all the modern studies of the cancer-causing effects of charbroiled meats are way off the mark—or the healthy eating habits of our Paleo ancestors have been grossly misinterpreted.

Or perhaps both conclusions are circumstantial and dependent on other influences.

The dating of fire use by early Homo sapiens to about 200,000 years ago fit nicely into the cultural evolution model—until a recent find in the northern Dead Sea valley (Associated Press, April 29, 2004). This new information places fire use at 790,000 years ago, throwing the evolutionary timeline wildly out of kilter.

At a campfire near an ancient lake in what is now Israel, researchers found evidence of meat consumption, including bones that had been broken to extract the marrow. Who these hominids were is unknown, but it is suggested they may have been a transitional form of Homo erectus, the precursor to Homo sapiens (modern humans).

Does this report mean that cultural development could have occurred 590,000 years sooner than we thought? If so, we could be facing a significant modification in our cultural evolution theory. Or do scientists need to recalculate the arrival of Homo sapiens? At any rate, the effort to reconcile this new timeline for fire use with the accepted orthodox timeline boggles the mind.

There’s more. Orthodox theory implies that we are “just surviving” for 120,000 years, often threatened by a harsh environment and forced to hunt large game. Recall that Neanderthals are robust, great hunters who possess larger brains than we have today, as do Cro-Magnon, our relatives. Yet both go extinct. This apparent contradiction may be partly explained by the evidence of the worldwide practice of cannibalism, a taboo subject among low-carbohydrate-diet enthusiasts. Perhaps the physical taming of fire has little to do with taming the spiritual fire, which, while hard to quantify, is equally necessary for civilization.

If we consider only the smaller-brained Homo sapiens and evaluate our progress for the last 200,000 (or is it 790,000?) years, we reach the mathematical conclusion that it takes us at least 190,000 years of hunting big game before we figure out that we can domesticate animals for food instead of chasing them down and competing with dangerous predators.

This means we are still foraging, hunting, gathering, scavenging, caving, and roaming right up until around 10,000 years ago. (So much for the thousands of years of including large game into our diet and the effect it supposedly had on the development of our brain.)

SMALL BRAINS, BIG IDEAS

Suddenly, as if out of nowhere, Homo sapiens begins agriculture in several parts of the world at once. Whole grain is cultivated and added to already existing nutrient-dense protein food sources; human brain development leaps to unprecedented heights.

We continue hunting to supplement our domesticated plant and animal food sources. Furthermore, we move out of the caves, start making clothes from plant fibers, build monuments, and begin learning astronomy, geometry, and other sciences. With this new dietary addition of cereal grains, we become fully civilized in a very short time. Yet before this, when on hunter-gatherer diets, Homo sapiens progresses very little even over the course of hundreds of thousands of years.

Perhaps grain and plant cultivation combined with animal husbandry is cause enough for this giant step in human progress and brain development. Could this extraordinary development have been the first and only such dramatic shift in humanity that occurred since our sojourn out of Africa? Stone tool development, hunting, the discovery of fire, some ritualistic burials, and cave artwork are explained easily enough through orthodox theory. Many of these developments are said to have occurred within the last 40,000 years. However, Homo sapiens’ leap to agriculture and civilization is not as easily explained.

For almost 200,000 years, our ancestry makes few basic changes, but 10,000 years ago, we suddenly make extraordinary changes with the advent of agriculture, which carries us to the present era, in which we have already been to the moon and are on our way to Mars.

Is it true that our relatively large-brained Paleolithic human predecessors could not have thought of smelting copper or other metals? Did they not have desire enough, and one inventive mind among them, to build a flying glider, even out of simple wood-and-tar materials? And is it true they did not build any cities for 190,000-plus years?

Is the story of prehistoric humankind really nothing but a grim struggle for survival?

The expert might respond, “Well, we haven’t found any evidence to indicate anything other than what orthodox theory suggests.” My response is, “That is not true. Keep looking—because what we do know at this time is limited to the knowledge of a few ancient historians and a couple of hundred years of excavations and scientific research, much of which is filtered to fit the prevailing theory. More information is being discovered almost daily.”

Personally, I have not given up on our ancestors, the ones who gave us the knowledge to go beyond the basic need to survive and to explore our universe. Rather than being forced into agriculture about 10,000 years ago, as some historians hypothesize, many early Homo sapiens most likely began the practice of agriculture many thousands of years earlier. Having had to abandon these practices, many were later forced to live in caves and subsist mostly on game, fish, and other wild animals because of extreme climatic changes brought on by catastrophic events, as indicated in traditional legends throughout the world.

THE CARBOHYDRATE QUESTION

Some adherents of low carbohydrate diets suggest we should eat like our Paleolithic ancestors. But what exactly does that mean? Which ones should we be eating like—the ones who ate more plant foods or the ones who ate more animal foods, the humans or the other hominids? How about our cannibal ancestors? Our Paleolithic ancestors’ diets varied as greatly as do those of today’s traditional hunter-gatherers around the world.

Most of today’s low carbohydrate diets lean toward the high-protein end; some suggest little to no carbohydrates, while others permit tubers, fruits, and honey. For Paleo-diet advocates, grains are usually discouraged, especially wheat, barley, and other gluten-containing grains. There are numerous problems with this extremist thinking. There is no doubt that protein and fats are necessary for proper metabolic balance and general good health, but so are highly nourishing carbohydrates in the form of cereal grains.

Paleo diet advocates often cite the Egyptians as an example of why one should not eat grains. Some Egyptian statues have bloated faces and waistlines, and fossil evidence of some Egyptians reveals low bone density. These and other problems are often cited as having been caused by a high-carb (grain) diet. While this is true to some degree, any Egyptologist knows that in its long history, this extraordinary culture experienced widely varying periods of abundance and scarcity. The Nile, their primary source of water, was susceptible to extreme flooding at times; drought was also experienced periodically, sometimes lasting for generations. During times of abundance, Egypt’s culture thrived on a wide variety of health-sustaining foods. Cattle, vegetables, grain, fowl and many other foods were essential components of their diet. In difficult eras, the culture suffered and had to make do with stores of grain and other foods that could be preserved.

Archeologists have found an essentially similar pattern of extremes in every ancient civilization; therefore, to state that all Egyptians throughout history were weaker and less healthy than Paleo hunter-gatherers because of a high grain diet is absurd and doesn’t take into consideration the many other important factors and changes in Egyptian history. Paleo hunter-gatherers were not exempt from such extreme dietary deficiencies either, as is evidenced, for example, by periods in which cannibalism was prominent; the same holds for modern-day hunter-gatherers.

However, the negative attitude toward carbohydrate consumption has little to do with whole grain consumption; it is rather a reaction to the imbalances caused by highly refined, denatured products derived from poor-quality grain, together with the heavy consumption of pharmaceuticals. We should face this problem not by eliminating carbohydrates, which are necessary for good health, but by replacing our modern, refined versions with naturally grown complex carbohydrates, such as the cereal grains that traditional agriculturalists around the world have consumed for at least 23,000 years.

THE AGE OF DESIGNER DIETS

A balanced traditional diet consisting of a wide variety of quality fruit, vegetables, grains, legumes, nuts, seeds, and animal protein supplies the widest spectrum of nutrients, well beyond what any modern fad diet can claim. The most extraordinary thing about this is that hardly anyone in the modern world has experienced this type of traditional diet for at least two generations! Instead, we have been overwhelmingly consuming a diet of highly refined carbohydrate foods accompanied by processed, factory farmed animal protein, toxic trans-fats, and loads of pharmaceuticals. Any close evaluation of these modern food sources reveals the truth about why health is on the decline and obesity is on the rise—as well as why there are so many extreme dietary trends.

We are living in the age of designer diets. There are diets for every ill and every belief. I have yet to meet one person on a designer diet who, before commencing that new diet, had eaten a high quality, balanced diet of traditional foods.

It is natural to some extent that such designer diets appeal to us: we seek release and relief from the debilitating effects of the processed foods we have been manufacturing and consuming for the last century or so. Sometimes the appeal of a particular designer diet lies in the fact that we are trying to escape from (or to compensate for the effects of) some other designer diet that didn’t fulfill our desire for weight loss, more vitality, or some other health goal.

Much of the food consumed by most people today consists of processed, denatured foods, laden with preservatives and many other nonessential substances that do little to support health and much to detract from it. More and more people are realizing this, so they naturally seek ways to remedy the problem and reestablish their health.

Enter designer diets: lots of them, and all different too—yet all of them offer similar promises. Those that stress natural, unprocessed, and preservative-free foods help us gain insight into the importance of quality in our diet. These are the only ones worth considering if we truly want to be certain that what we are eating is healthy. However, even among these more naturally oriented approaches to diet, the balance of nutrients often remains a very important consideration.

The first category of people seeking designer diets consists of those needing to escape from nutrient-deficient foods for whatever reason: health, weight loss, and so on. The second category consists of those who have already tried at least one new, restrictive diet and failed.

Those of this second category try another designer diet, sometimes continuing the new diet for years, or perhaps only months. All eventually find the need to eat things not on the diet—some often, others only occasionally. This is natural: designer diets tend to restrict the very foods with which we are most familiar, and in eliminating them from our diets, we often feel deprived.

Another reason for straying from a diet is a lack of energy due to a lack of balanced nutrition; increased use of stimulants is a common response, but sometimes these stimulants (coffee, tea, tobacco, etc.) can come to comprise a larger percentage of our diet than healthy foods. Actually, coffee, chocolate and other stimulants are good barometers for determining how well one’s diet is working: the more of them one consumes, the less effective the diet. The need for these foods, other than in moderation, is a strong message that changes in one’s diet are called for.

In general, modern Homo sapiens are unsatisfied nutritionally and, thus, metabolically. This is not the result of a simple-minded generalized notion of “carbohydrate overload.” Nor is it caused by the consumption of traditional whole grains in the diet, often mistakenly classified by low carb proponents as being in the category of “bad carbs.” Quality whole grains are health-promoting foods when balanced with adequate amounts of protein, fats, and fresh vegetables.

Organic, complex carbohydrates and processed, junk-food carbohydrates are as different from one another as a chicken is from an apple. If there is a beneficial aspect of a low carbohydrate diet, it is in the concomitant suggestion by many teachers to reduce or eliminate junk-food carbohydrates. Once such a shift is accomplished and protein and fat are increased, one’s metabolism gets a jump-start after years of stagnation caused by excessive, poor quality carbs. One is then energized, tends to lose weight and experiences life differently, at least for a while. So there may be a temporary benefit in a low carbohydrate diet—but beyond that temporary benefit, it is critically important to make a distinction between the qualities of different carbohydrates and to eat a variety of health-promoting foods, including grains and legumes.

THE FALLACY OF THE SINGLE CAUSE

All nutritionally related health problems invariably result from a combination of factors. Some think obesity, for example, is caused simply by excessive fat consumption, but it is the quality of fats and the imbalance between healthy fats, proteins, and carbohydrates that principally contribute to extreme weight retention. Refined carbohydrates also contribute to obesity, often even more than fats. All essential macronutrients play a role in every diet-related illness; it is never just one thing.

Try removing one category of nutrition from your diet; inevitably, you will find yourself eating it again in some form or other. After more than 25 years in the natural health field, I have had the opportunity to hear many stories and witness many dietary tragedies. Here is a common statement I hear from those trying to adhere to a low carbohydrate diet: “When I was doing it I felt great!” This sort of past-tense expression is typical with designer diets that depend on restrictions or limited choices. Deprivation of a variety of healthy foods makes these types of diets difficult to maintain. Another frequently heard statement is, “I really enjoy the food, but I find myself cheating a lot.” Again, depriving oneself of a variety of healthy foods eventually leads to compulsive overeating and other emotional problems.

Those with allergies to grain (wheat and gluten-containing products) should naturally avoid gluten-containing grains—if they do indeed produce an allergic reaction. I say “if” because there are people who are gluten-intolerant who can still eat high quality grains traditionally prepared by fermentation, soaking, or sprouting. Also, for people experiencing this problem, it is important to consider that the problem itself was not caused by high quality, traditionally prepared grain eaten as part of a balanced, healthy diet. Gluten intolerance can be carried from generation to generation and is caused in part, and exacerbated, by refined gluten-containing products. However, many of the symptoms of gluten intolerance (leaky gut, allergies, etc.) are duplicated in those with a history of exposure to immune-suppressing substances, especially pharmaceuticals, drugs and chemicals of all kinds. Indeed, the actual causes of these health problems may lie more in these products than in any others.

It is interesting to note how saturated fats of all types have been accused for years of causing a host of problems, from heart disease to cancer; yet today, traditional saturated fats are finally starting to be recognized as healthy and essential macronutrients. As it turns out, the problems once believed to be caused by these traditional fats were in fact caused by trans fats and refined polyunsaturated oils.

Protein, too, has been demonized in the past; now it appears to be carbohydrates’ turn to take the stand and be accused of the majority of the Western world’s health woes.

In order to prevent future problems, we must be careful not to make the same mistake we made with fats: that is, we need to avoid the simplistic error of classifying all carbohydrates in the same category. Certainly it is important to acknowledge the problem of gluten intolerance for some people, yet it is at least as important to consider the long history of grain consumption among healthy peoples eating a balanced traditional diet.

DIETARY COMMON SENSE

With quality carbohydrates, very few people have had “too much of a good thing.” In other words, not many people have overdosed on unrefined, complex carbohydrates; most have their excessive carbohydrate experience by way of refined sugar and refined grain products.

However, there are still those scant few who believe they have developed problems from the excessive consumption of good quality grains they had consumed for a number of years. While it may be true that grains were consumed in excess, grains cannot be accused of causing these people’s difficulties. In my observation, these health problems develop from a diet lacking an adequate balance of protein, fats, and other nutrients needed to balance complex carbohydrates.

There are many designer diets that promote high quality natural carbohydrates yet discourage the intake of high quality fats and proteins, often favoring incomplete protein sources or “natural” junk foods, such as those derived from nontraditional soy products that have no health benefits. Consuming even high quality carbohydrates, such as organic whole grains, without sufficient quality protein and fats can be an example of when too much of a good thing goes bad.

Indeed, this can happen all too easily with any nutritional source, be it carbs, proteins, or fats. Unbalanced high natural carbohydrate diets are no more beneficial to our health than unbalanced low carbohydrate diets. When people eat a healthy diet with a variety of healthy foods in a balanced way, they have little need for designer diets.

This does not mean you will not need to adjust your diet on occasion to accommodate life’s changes. However, when you have all the finest quality tools at your disposal, you are more apt to successfully work out life’s often complex health issues than if you have only a few good tools at your disposal and are unaware of what needed elements you’re missing. With all the proper nutrients available and utilized, weight problems, energy loss, and similar concerns are easily remedied, and you become satisfied on many levels of your being.

In other words, it is easier to address diet-related problems by starting from a balanced traditional diet than it is to start with an extreme diet that will lead, sooner rather than later, to other extremes.

To be continued

The above material is excerpted from the forthcoming and newly revised book by Steve Gagné, The Energetics of Food.

The Origin of Agriculture

September 13, 2004

Theories of Plant Domestication

In the distant past, a few Homo sapiens made a decision that altered the biological, psychological, and spiritual essence of humanity. That decision was to work in partnership with the land through the domestication of plants and animals. Researchers from numerous scientific disciplines have made great strides in attempting to explain the origin of agriculture. However, exactly when, how, and why this happened at all is still very much a scientific mystery.

The information gleaned from extensive research in this multi-disciplinary pursuit has offered a range of insights into how humans adapt to social and environmental pressures, develop patterns of health and disease, and structure sophisticated forms of culture and civilization. These ideas about agricultural origin are rooted in the cultural evolution theory, which today serves as the basis upon which academic beliefs and ideas are formed and supported by scientific research and modern technology.

Much of this research is based on fossil records, comprised of bones, seeds, and stone tools gathered from caves, hearths, and wherever ancient settlements can be uncovered. These records help us to formulate possible circumstances for addressing the following questions:

1. Where and when were plants first cultivated and identifiably domesticated?
2. What evidence is there to support current theories, and/or what alternative interpretations can we draw from the available evidence?

The Cultural Evolution Theory, Methodology and the Fertile Crescent

The earliest domestication of plants may have originated in the Near East’s “Fertile Crescent,” an area that stretches from the eastern shore of the Mediterranean Sea and curves around like a quarter moon to the Persian Gulf.

For nearly two centuries, explorers and scientists from different parts of the world have traversed this area in search of the origins of civilization and agriculture. Einkorn and emmer wheat, barley and lentils, goats and sheep all purportedly originated here between 5,000 and 10,000 years ago. Religious texts, legends, and archeological discoveries document the antiquities of Sumer, Ur, Babylon and other thriving cultural centers. This part of the Near East housed a literal treasure trove of artifacts, bones, and seeds that would be used to substantiate the cultural evolution theory. This archeological evidence has helped create a consensus that has become the basis of today’s textbooks on ancient history.

The most thoroughly researched area of the world for the advent of civilization, the Fertile Crescent is held today as the model to which all other such research sites throughout the world are compared. The Fertile Crescent, goes the theory, is where it all began—agriculture, civilization, all of it. Indeed, this “cradle of civilization” idea is so entrenched a part of historical orthodoxy that its axiomatic status has served to discredit those pieces of evidence that seem to challenge it.

This sort of fitting fact to theory is not new in scientific methodology. Archeological and anthropological researchers commonly revise initial testing results for findings; this is a normal part of scientific procedure when deemed necessary. For example, South and Central America are still termed “New World” countries, the underlying assumption being that their development must postdate that in the Near East.

However, increasing amounts of controversial data are being found both in the Americas and in parts of Asia. Such evidence is tested with a variety of technologies, including accelerator mass spectrometry (AMS), which is in essence an upgraded form of radiocarbon dating.

AMS can accurately date samples as small as a single grain while detecting and reducing errors from fossil displacement. This can be especially useful when a sample (say, of bone or seed) has a different date than that of the surrounding strata. However, even with the latest technology, many of the seed remains found are so severely carbonized or decomposed as to make it extremely difficult to determine whether a sample is wild or domestic.
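
For readers unfamiliar with how radiocarbon (and therefore AMS) dates are derived, the arithmetic behind them is the standard exponential-decay relation. The sketch below is offered only as a general illustration, using the textbook half-life of carbon-14; it is not taken from any of the excavation reports cited in this article.

$$
N(t) = N_0 e^{-\lambda t}, \qquad \lambda = \frac{\ln 2}{t_{1/2}}, \qquad t_{1/2} \approx 5{,}730 \text{ years}
$$

so that the age of a sample follows from the measured fraction of carbon-14 remaining:

$$
t = \frac{t_{1/2}}{\ln 2}\,\ln\!\left(\frac{N_0}{N}\right)
$$

AMS does not change this relation; it counts the surviving carbon-14 atoms directly rather than waiting for them to decay, which is why a single charred grain can yield a usable date.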

Carbonized seed remains are a common source of agricultural evidence. The process of carbonization occurs when organic compounds are subjected to high temperatures and converted into charcoal. While this process does preserve remains for reliable analysis of composition, it also causes morphological changes that can make it difficult to distinguish wild varieties from their domestic counterparts. Among grass seeds, there is also the problem of trying to determine the relationship, if any, between the wild grasses (emmer, einkorn, and barley) of 10,000 years ago and those of the present. Wild stands still grow throughout the Fertile Crescent and beyond.

The Independent Location Theory

While ancient plant remains have been extensively studied in the Near East, such is not the case in the “New World.” Plant domestication research in Mexico and South America involves about a half dozen cave sites.

In Mexico, samples of squash seeds and beans dating around 7,000 to 9,000 BP (“before present,” meaning before the radiocarbon baseline of 1950)1 have been found in the deepest strata in some of these caves. Domestic squash seeds found in a cave at Oaxaca, for example, were dated at 9790 BP—the oldest date of any domestic plant species found in the New World. Testing was based on dating a charcoal sample found next to the seeds; because of the extreme antiquity of the date, the seeds’ age was immediately cast into question. It was suggested that the seed samples had somehow been displaced downward from the upper level of the cave; or alternatively, that the charcoal sample had somehow been displaced upward from the deeper layer.2 Both explanations are possible, yet one cannot help but wonder why experts feel compelled to resort to such elaborate reasoning when the discoveries occur in a location so far removed from the established Near East cradle.
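
As a quick illustration of the BP convention, using only the date already given above (this is simple arithmetic, not a figure from the study itself), a squash seed dated 9790 BP corresponds to roughly

$$
9790 - 1950 = 7840 \ \text{BC}
$$

since “present” is fixed at the radiocarbon baseline year of 1950.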

The Mexican sites, furthermore, are not alone in this. Peoples of other ancient cultures, from the Peruvian highlands and China’s Yangtze River valley to parts of Egypt, India, and Papua New Guinea, may also have domesticated plants dating back as far as those of the Fertile Crescent. However, the excavations for evidence of agriculture at these locations are still in their infancy and cannot yet be compared to the extensive findings in the Fertile Crescent.

Another part of the world with a long history of agriculture is South Asia, where a wide variety of annual and perennial forms of “wild” or “free living” rice survives today without human intervention. Not too many years ago, domestic rice was thought to have a history of between 1,000 and 2,000 years; current findings have pushed its origin back much further. The recent discovery of a handful of rice, found in the village of Sorori in central Korea and dating back 15,000 years, strongly suggests that an agricultural practice here coincided with or even preceded that of the Fertile Crescent—where agriculture is still held to have originated.

The age [of the Sororian rice] challenges the accepted view that rice cultivation originated in China about 12,000 years ago. . . . The region in central Korea where the grains were found is one of the most important sites for understanding the development of Stone Age man in Asia.3

After thousands of years of cultivation, it is difficult to establish the identity of the original wild progenitor of domestic rice. Researchers struggle with whether present “free living” rice is truly wild, a cultivated escapee, or something between: cross-pollination, genetic exchange, expanding landscapes, and shrinking natural habitats have distorted genetic qualities between wild and domestic species. “Weedy” forms of rice have also evolved over time, escaping into unmanaged natural habitats, flourishing at the edges of agricultural landscapes, and exchanging genetic material with both wild and cultivated varieties.

Even as they wrestle with the problem of potential multiple domestication sites, researchers are also faced with this paradox of the origins of agriculture: Why did hunter-gatherers begin domestication of plants in areas with ample resources of wild foods? Thus, experts today still cannot state conclusively where plants were first domesticated and agriculture began—and the very hypothesis that it began because of hunter-gatherers’ need for a new food source is under challenge as well.

Classification, Morphology, and Genetic Testing

When determining whether a plant should be classified as domestic, scientists look for large and fast-sprouting seeds, glume (grain hull) adherence, and strong rachises (the rachis being the part of the seed head that attaches the seed to the stalk). These traits are considered markers of domestication because, since they are naturally selected against in wild species, they could evolve only under cultivation.

Large seed size, for example, is usually considered a marker showing an adaptive response to selective pressures relating to domestication. The hunter-gatherer’s deliberate planting of slightly larger, pre-selected seeds from wild stands into seedbeds, rather than into the plant’s natural wild habitat, is believed to eventually cause morphological changes in the plants, resulting in larger, domestic-type seeds. By selecting wild mutant seeds with thinner glumes and stronger rachises, early hunter-gatherers were able to build up a supply of mutant seeds from wild stands over time. It is from this supply of stored mutant seeds that domestic cereals are said to have originated.

It is important to know that, even with the multiple scientific disciplines used to study agricultural origin, the sources of evidence vary considerably in reliability. The three important founder grains from the Fertile Crescent—emmer wheat, einkorn wheat, and barley—are the earliest examples known to be located near their wild relatives. There are several species growing today in the same area that are viewed as possibly being the original ancestors of these domestics.

However, after intensive study of morphologies and genetics, including analyses of plant proteins and interfertility testing, we are often still perplexed by a wild progenitor to which the domestic species appears morphologically identical but with which it has no genetic compatibility. To illustrate this “looks can be deceiving” aspect, Daniel Zohary states:

A special case of species diversity is provided by “sibling species,” that is, taxa so similar morphologically that it is very difficult—or even impossible—to distinguish between them by their appearance; yet, crossing experiments and cytogenetic tests reveal that they are already effectively separated from one another by reproductive isolation barriers such as cross-incompatibility, hybrid inviability, or hybrid sterility.4

A well-known example of sibling-species relations is that of wild and domestic emmer wheat. Triticum araraticum, one species of wild wheat, is morphologically indistinguishable from domestic emmer wheat. However, all attempts to crossbreed the two have failed, thus proving that the former was not in fact the progenitor of the latter. Furthermore, a true ancestor, morphologically different from emmer wheat yet with identical chromosomes, was found and successfully interbred, thus linking it credibly to emmer as a potential wild progenitor.

Wild progenitors used to be classified as species separate from the domestics but are now ranked, along with the domestics, as subspecies of the same species. For example, domestic emmer, Triticum turgidum, is the subspecies dicoccum. Its suggested wild ancestor, once called Triticum dicoccoides, is now classified as Triticum turgidum dicoccoides. What are the most obvious differences between them? The domestic grain somehow got larger, and the rachis got tougher and less brittle.

But are these variations, together with the fact that interbreeding was successfully accomplished under laboratory conditions, enough to identify Triticum dicoccoides as the wild ancestor of domestic Triticum turgidum? Or is there a danger here of leaping to simplistic conclusions?

We must remember that numerous factors, such as changes in climate or animal and human intervention, have influenced genetic variations and diversification among the wild progenitors over thousands of years. While it is generally believed that the wild progenitors of most cultivated plants have been satisfactorily identified, many researchers recognize the need for more data.

The simple identification of a morphological change does not, in itself, constitute adequate documentation of a plant species having been brought under domestication. Linkage must be provided between the observed morphological change and a set of causal behavior patterns. It is not enough simply to document phenotypic change. It is also necessary to explain why such change appears in response to a newly created environment of domestication.5

The example of wild and domestic emmer, Triticum turgidum, may fit most additional criteria for domestication, given that the wild and domestic emmer could successfully interbreed and had identical chromosomes. Yet is it not possible that the putative wild ancestor of emmer could in fact have once been a cultivated escapee itself, one which then adapted to a wild environment over thousands of years?

Another example is the fragments of emmer wheat dated 9,500 BP from the southwestern tip of the Fertile Crescent at Jericho. Evidence as to whether the fragments are wild or domestic is still inconclusive. Other samples of emmer dated 9,700 BP and found just north of Jericho near Damascus, however, are domestic.6 Keeping in mind these specimens are thousands of years old and have been through extreme changes, is it not possible that, again, what are thought to be wild samples of emmer are simply genetically altered cultivars, that is, a once-cultivated subspecies that has since run wild?

In order to consider this possibility, we must reexamine the common assumptions about our earliest agriculture origins: could these “origins” in fact be examples of re-emergence from previous cycles of civilizations?

Without giving this consideration due weight, we are left with the mysterious appearance of numerous species of grasses, some of which share similarities to cultivated grain species both genetically and morphologically. One could argue that the dates of our examples fit the conventional time line (10,000 years for domestication), yet these are only a few examples of what has been found.

The recent and totally unexpected find of several grains of morphologically domestic emmer wheat at the Palestinian site of Nahal Oren also raises the possibility that grain was under cultivation as early as 14,000 BC.7

An archeological site called Ohalo II in Israel reveals 19,000 well-preserved grass grains. Among the specimens are pieces of wheat and barley dating to 23,000 years ago8—about 7,000 years older than the Nahal Oren samples cited above! In light of findings such as these, it seems quite possible that many wild progenitors could be cultivars from a civilization or civilizations predating the orthodox theory for agricultural origin.

What are often called wild progenitors of domestic grasses may be suspect for other reasons. Several other sites in the Fertile Crescent have yielded combined specimens of wild and domestic emmer, einkorn, and barley. The mix of wild progenitor and domestic is often interpreted as a sign of early cultivation from wild to domestic. However, these may simply be examples of separate food stores for ruminants and humans. And while animal domestication does not happen until around 8,000 BC, according to orthodox timelines, it is still possible that some precursor of animal husbandry existed that would account for the wild grass harvests.

Cultivars and Wild-Growing Domestics

Einkorn wheat represents another perplexing example of early wild and domestic plant research.

The present-day northern portion of the Fertile Crescent yields broad bands of wild einkorn, yet research has designated the wild progenitor of domesticated einkorn as being restricted to a small region near the Karacadag mountains in southeast Turkey, far removed from the northern broad bands of wild einkorn. If the northern stands of wild einkorn are not the progenitors of domestic einkorn, then what are they? Could they be a once-domestic species that ran wild at some distant period of prehistory, eventually having adapted to their present environment?

It is believed that hunter-gatherers living in permanent settlements were harvesting a species of wild einkorn 11,000 years ago along the Euphrates River.9 If hunter-gatherers were already harvesting by that time, perhaps they had been harvesting it for thousands of years before that. What species of wild einkorn was this? Was it the progenitor of domestic einkorn, the species found in the Karacadag mountain region? Or was it another species, like the one representing the broad bands of the northern regions, a species that never became domesticated?

For that matter, what about the modern wild einkorn found in the area comprising Israel, Lebanon, southwest Syria and Jordan? This Palestinian variety has large seeds, often larger than those of domestic wheat.10 Could these, too, be feral crops that were once cultivated in antiquity and have since adapted to these regions? Large seed size is considered a marker of domestication—yet this wild species has seeds larger than those of most domestic species.

As long as we are focused on the Fertile Crescent, let us consider the origin and introduction of barley, the third founder crop of this region.

Two types of domestic barley have been recovered here from early settlements. It has been suggested that hunter-gatherers harvested wild barley before domesticating two-rowed barley, followed shortly afterwards by six-rowed barley. Of the two types, two-rowed barley shows more of the wild barley characteristics; both two- and six-rowed domestics have been found together in early settlements.

Wild barley, like wild einkorn and emmer, develops brittle rachises for dispersal when fully ripened. These rachises are segmented so individual spikelets and grains can be shed from top to bottom when ripe. Only about five to ten percent of the rachises are semi-tough in wild barley, and this small percentage represents the average amount of seed that is held to the stalk at the time of maturity.

According to theory, early hunter-gatherers selectively chose seeds from these specific stalks at an early stage, before ripening; they did so because even the five to ten percent of rachises that held their seeds would, at maturity, immediately shed them to the ground when pulled by human hands. The hunter-gatherers (so goes the hypothesis) would have saved these partially ripened seeds for planting stock.

In order to be motivated to do this, these hunter-gatherers would have had to believe that these wild grass seeds, after being planted in homemade seedbeds, would produce larger, more stable seeds and larger yields after a few generations. Are we to assume that they knew what the outcome would be before they tried it? And are we to further believe that these wild grasses could genetically morph into domesticates through simple cultivation and planting techniques? It has still not been demonstrated today, nor is there any evidence that such a demonstration is possible, that a wild, mutated seed can be transformed into a domesticate through cultivation in a foreign seedbed.

As with emmer and einkorn wheat, it is not uncommon to find wild and domesticated barley fragments together in archeological sites. In areas of the Fertile Crescent, fully cultivated emmer wheat and two-rowed barley have been recovered from ancient sites, accompanied by wild-weed einkorn, ryegrass, and other weeds considered pre-adapted to cultivation. It is still highly questionable whether or not the selective pressures imposed on wild grasses, as suggested by the cultural evolution model, caused the morphological changes that resulted in domesticated varieties of cereals.

Early hunter-gatherers were just as highly attuned to their food sources as modern day hunter-gatherers. With hundreds of thousands of years’ experience in finding food, knowing which plants to eat, observing animals in their natural habitats, and incorporating some of these observations into daily life, it is difficult to believe that these people, who hunted and ate ruminants, were ignorant about the wild grasses eaten by these animals. After all, countless generations of hunter-gatherers used wild grasses for bedding, weaving baskets, and fuel. Could these tough, brittle, wild grains really have been food for these early people, as suggested by some leading specialists?

While there is plenty of evidence for wild grain harvest, there is actually little evidence supporting human consumption. Evidence for the latter is restricted to a few samples of paleofeces found in caves. The location and scarcity of this evidence would suggest that a famine or climatic disturbance might have been in effect, causing the humans to hole up in the caves until it was safe to venture outside. If this were the case, the usual foods may have become scarce, causing those people to eat whatever they could find. (We must also consider the possibility that Paleolithic peoples were able to process wild grasses, rendering them digestible and fit for human consumption, without pottery in which to soak or cook the grains, but this possibility is quite slim.)

Proteins can be useful genetic markers for distinguishing wild ancestors from domestics. Shared genetic characteristics, if found, can reveal the wild progenitor of the domestic. However, this methodology is difficult to apply if the wild progenitor no longer exists, as is often the case, leaving us with hypothetical ancestors that must have been the progenitors of existing wild species.

Cross-pollination, genetic exchange, and environmental changes have blurred the lines between wild and domestic varieties over thousands of years. Along the way, opportunistic weeds of many varieties have joined the mix and contributed to new gene pools. In essence, it becomes increasingly difficult to determine whether the domestics came from weeds or the weeds came from the domestics.

A good case in point is teosinte, a diverse group of wild grasses native to Mexico, Guatemala, and Honduras.

Teosinte is suspected to contain the progenitor of domestic maize because the two are genetically compatible and successfully crossbreed through repeated hybridization in fields. They differ, however, in the morphology of the female ear. The few small seeds of teosinte husks look nothing like the large, fully seeded ears of maize. Teosinte has numerous branching stalks, each culminating in a few small, shattering seed spikes. Corn (maize), on the other hand, is a single stalk bearing an ear of tightly arranged, rowed seeds that cannot disperse naturally.

Because of its unique makeup, some experts believe teosinte to be a descendant of domestic maize; most agronomy books and relevant literature see it the other way around, and present teosinte as the wild ancestor of maize. Yet regardless of which direction one subscribes to, teosinte-to-maize or maize-to-teosinte, how such an extraordinary transformation could have taken place in the remote past at all is an inexplicable mystery.

Many varieties and sizes of domesticated corn have been found in deep levels of caves throughout Mexico, revealing the extensive knowledge of plant genetics and breeding techniques among early inhabitants of Mexico and Peru. A comparison of proteins between teosinte and domestic maize reveals some similarities, and no species of wild maize has yet been found. Some teosinte types have been categorized as subspecies, yet there are no morphological indications of their transformation into domestic maize.

With all our current technology, it seems reasonable that we should be able to create a domestic species from a wild one in a controlled environment, simulating an early hunter-gatherer’s planting methods—if that is indeed what happened. What would it take? And if doing so would prove the prevailing theory of wild mutant seed transformation, why haven’t we done it yet?

Identification of chromosomal affinities between wild and domestic crops is another method for finding wild progenitors. If cultivated crops show full homology and interfertility with a wild species from the same genus, then that wild species could be recognized as the ancestor of the crop. This may be misleading, though, because chromosomal affinity does not necessarily determine ancestry. This is especially true when there are wide variations in morphology, as is typical with many grain progenitors and their domesticated offspring.

An obvious advantage of domestication traits is that they evolved only under cultivation and are strongly selected against and absent in the wild.11

If this is true, it should be easy to reverse the process and produce wild, “shattering” crops from domestics once the specific gene sequence is found. (“Shattering” crops are those wild forms whose seeds drop to the ground upon ripening, rather than adhering to their stalks, as do the seeds of domestics.)

…crosses between wild progenitors and the cultivars have shown that this shift is brought about by a recessive mutation in one major gene or (more rarely) by a joint effort of two such genes. In all these crops, breeders have also performed many intra-crop crosses (between cultivars). Except for barley, none of these within-crop crosses has been reported to produce wild-type brittle or dehiscent…12

It would appear that our ancestors were able to “tweak” that single gene from wild grasses so that the change could not be reversed. Only domestic barley, with its two independent recessive genes, has successfully produced wild-type, brittle grains, and these are still different from the “wild species.”

Aside from wild chenopod pseudo-cereals, which shed their seeds within a couple of days of maturity and can be husked by simple rubbing and winnowing, the idea of pre-agricultural peoples regularly consuming wild grasses (progenitors of einkorn, emmer, barley, rye and spelt), as advanced by some researchers, may simply be an attempt to promote and maintain the cultural evolution theory as applied to plant domestication. The premise that Paleolithic humans ate wild grasses that may have led to the eventual domestication of the wild species also supports this gradual-step theory. This is not unlike the theory that seed-plant cultivation followed the cultivation of other vegetable plants. Evidence for hunter-gatherers cultivating propagated vegetables before seeds is lacking, but the theory of a gradual-step process comfortably fits the current paradigm.

Could these grass species of einkorn, barley, and emmer, so often suggested as the wild progenitors of their modern day domesticates, be something other than wild?

Based on the hypothesis that over thousands of years a plant could experience numerous morphological changes, is it not possible for a once-domesticated plant to revert to some semblance of a wild version? We have already mentioned how it has been suggested that wild grass species, once cultivated, could morphologically transform within 300 years when transplanted into seedbeds. One example of such morphological change would be brittle rachises becoming “semi-tough” enough to be identified as cultivated.

While this may be possible, it raises another question: could other important markers (thinner glumes, larger seeds, greater adaptation to climate and soils, resistance to diseases and pests, etc.) that resulted from selection pressures and were found in domestic species also have morphed along with the rachises, or did some of these traits occur earlier and others later?

Some of these developments are major adjustments to a wild grass species, involving genetic manipulation at some level in the process. There is no indication that these markers, not to mention increased nutrition and faster sprouting time, could have occurred consecutively or simultaneously over a few hundred years simply by planting in seedbeds, even if the seeds were carefully selected, wild, mutant seeds. Granted, some hunter-gatherers from the Epipaleolithic period knew a great deal about the growing cycles of plants (and even about seed planting and cultivation to some degree), but the genetic manipulation of a wild grass species into a productive, nutritious offspring is something quite extraordinary.

The question thus remains: who were these people and how did they know how to manipulate plants at the genetic level? Evidence at many archeological sites indicates that the knowledge for plant domestication was already there and was not an evolutionary process.

The idea that many of these “wild” species of cereals are actually cultivars is a realistic consideration. Edgar Anderson addresses this important issue in his book Plants, Man and Life. He suggests that we consider previous cycles of cultivation when examining what we think are “wild relatives” of our basic food crops.

This is indeed a consideration for researchers, as it is now well known that some species were in fact cultivated before the time they were once thought to have originated. Corrections in origination dates, along with genetic mixing of wild and domestic crops, environmental pressures, and time can realistically contribute to de-evolution of a domestic species.

Anderson gives an example of how one might encounter in a jungle a smaller version of a cultivated fruit and come away with the first impression that it is a wild relative of the domestic version; this is an all-too-common occurrence. While it is possible that what you are witnessing is a wild food, it has been repeatedly shown that many of these wild-appearing foods are remnants of refuse heaps, seeds spit from a hunter’s mouth after he finished a lunch brought from home, where the fruit was cultivated, or garden escapees. I have personally encountered wild-growing samples of cacao, coffee, papaya, avocado and other familiar varieties while in remote jungles of South and Central America, far from any agricultural base.

Anderson also points out the great variations among wild-growing domestic avocados in Central America. Such variations appear to an even greater extent among avocados presently growing under managed cultivation. He brings to attention the fact that apples appear in pastures, forests and fields throughout the country, yet none were here in America when the first European colonists arrived. Apples are likely from Asia, where various species are native. We do not know how much of a connection the wild-growing apples have with previous cycles of cultivation, but they are, without question, examples of cultivated apples that have run wild. The same is likely true for many “wild” relatives of cereal grains. At my home in Vermont, we have three apple trees and two pear trees on our land. We were the first on record to build on this particular spot; yet although we did not plant the trees, they are not wild fruit trees.

Wild weeds are highly successful plants that can easily overcome a disturbed habitat, as evidenced in most gardens by weed races commonly found among domestic annuals and perennials. Early hunter-gatherers, like their modern counterparts, are known for having collected and stored a variety of wild seeds. Most of these seeds are known for specific uses, such as food or medicine. But what evidence is there that pre-agricultural peoples actually used wild grasses for their own consumption?

Jack Harlan, an authority on agricultural origins, was able to prove that a small group of people, within a period of just three weeks, could harvest by hand enough wild grain to sustain themselves for one year. To some, this classic study suggests that our ancestors did the same. However, it does not prove that they did so, nor does it explain why they would have done it. Were the harvests meant for pre-domestic ruminant consumption, or for some other highly useful purpose?

Recently, a team of international scientists found fields of wild einkorn wheat in the Near East that provide the closest genetic match to domestic einkorn. DNA samples were obtained from 68 separate lines of cultivated einkorn, and all were found to be closely related. DNA profiles were also taken from 261 separate populations of wild einkorn in the same area. Of the 261 wild samples, 19 from the volcanic region of the Karacadag Mountains in Turkey were distinct from the other wild einkorn lines. Further analysis showed that 11 of the 19 samples had a close phylogenetic similarity to the cultivated einkorn. As a result, these 11 wild samples could be identified as modern descendants of the wild progenitor of einkorn wheat.13

Note that these wild samples were identified not as wild progenitors but as descendants of a wild progenitor, based on their similarity to the domestics. But how can they credibly be seen as descendants of a wild progenitor if we do not know where or what the wild progenitor is? Phrases such as “similar to,” “related to,” “descendants of,” and so on imply a link to some long-lost original strain of wild grass that, through a series of mutations, became the domestic grain we know today. Yet, in many cases, there is still no actual progenitor.

Evidence does strongly suggest an area for the earliest domestication of einkorn wheat, but, like so many other domestic plants, the wild progenitor remains elusive. What we have are suspected descendants of these elusive wild progenitors, much like the situation in the study of human origin with its search for the elusive “missing link.”

The Process of Cultivation and Other Theories

The presence of grinding stones, sickle blades, and storage structures in many early hunter-gatherer sites indicates a long reliance on wild seeded plants, particularly wild grasses. Refinement of harvesting and cultivation techniques by selectively choosing plumper seeds eventually transformed fields of grain into crops with thinner husks, stronger and less brittle rachises, stalks with increased seed clusters, larger and more dependable yields after harvesting and threshing, increased nutritional value, and spare seed for storage. These newly cultivated crops could have eventually replaced their wild counterparts in importance. After much trial and error, these once-wild grasses, first through careful selection of suitable wild seeds and later through repetitive cycles of sowing, reaping and harvesting, became domestic crops fully dependent on human intervention.

Some archeobotanists believe morphological changes, which include changes in size, shape and form, could have taken place anywhere from 100 to 300 years after the first time a seed was planted in a seedbed by early hunter-gatherers. Others believe it may have taken longer, up to 1,000 years. This is an interesting hypothesis that appears to be based on sound evidence, albeit interpreted through the theory of cultural evolution. Nevertheless, it is a hypothesis—not a fact. The evidence is therefore open to interpretation from alternative perspectives as well.

In Origins and Seed, Gordon Hillman discusses cultivation as a precursor to domestication and suggests that cultivation in the Jordan Valley could have started as early as 12,000 BC. He further states, “However, detecting the start of cultivation will, as ever, be problematic.” The reasons for this, says Hillman, are that “cultivation prior to domestication can be recognized only from indirect evidence, not from the remains of crops themselves” and “domestication itself is often difficult to detect.” Further influences in the process would include unripe harvesting and genetic infiltration of wild genes from neighboring populations of wild grasses.

Indeed, even with the most rapid domestication, it is inevitable that “modifier genes” would have ensured that the crops continued to contain an admixture of wild forms for many centuries … This effect, combined with the inherent problems of distinguishing wild and domestic cereals from charred remains [archeological records], ensures that detection of domestication in the archaeological record will continue to be extremely difficult.14

So, why cultivate in the first place? Why spend centuries planting something that will not produce the desired result for generations? Furthermore, when the plant finally does reach its full potential, its product becomes a causal factor, according to many historians, in both the creation and downfall of civilization. Authority Jack Harlan nicely sums up the scientific position on the question of cultivation:

What does planting and reaping, planting and reaping, that is farming, do to the genetic architecture of annual seed crops? Most of our answers to this and similar questions have been intuitive or simple guesswork.15

Again, while there is no doubt that wild grasses played an important role in the lives of hunter-gatherers, it may not have been for food.

What about those 261 “wild” samples from the Fertile Crescent, only 19 of which have genetic similarities to domestics? Could these be additional examples of cultivars that have morphologically reverted to their present status after running wild some thousands of years ago?

Research has shown that some early hunter-gatherers from the Fertile Crescent practiced what is called vertical transhumance, wherein groups of people would seasonally move their campsites from low elevations to higher elevations in the spring to harvest ripening wild grasses and to hunt wild goats and sheep that followed these ripening grasses. If we remove the cultural evolution model as an interpretation for this scenario, we are left with typical pastoralists herding their flocks to ripening grasses. Admittedly, this would be at a time well before they are believed to have had domestic animals—but the truth of the matter is that, as with our inconclusive results concerning plant domestication, we really do not know when animals were first domesticated.

The majority of researchers still either hold to the Fertile Crescent theory or believe that plant domestication began independently in several parts of the world within the last 5,000 to 10,000 years. Both perspectives depend on the cultural evolution theory for their basis. Either orientation posits a long period of experimenting by hunter-gatherers with wild grasses and roots predisposed to domestication before agriculture appeared on a large scale.

But is it possible, at least in some of the major areas where agriculture began, that plant domestication did not happen through this evolutionary process of human experimentation? Although it specifically addresses contact between the hunter-gatherers and early farmers of central and northern Europe, the editors of Last Hunters—First Farmers offer another suggestion that could easily be applied to any number of other locations where agriculture began:

The origin of agriculture involves only a very few places in a few brief moments of time. The spread of agriculture is the primary means through which farming has become the basis of human subsistence. It would seem essential to keep both colonization and adoption, and the kinds of evidence and questions that they involve, in mind in any discussion of the transition to agriculture.16 [Emphasis added.]

Conclusions

Agriculture cannot at present be conclusively proven to have begun only about 10,000 years ago, since additional evidence for agriculture extends further back into prehistory. What can be unequivocally stated is that agriculture had already emerged several times in numerous parts of the world in the last 12,000 to 20,000 years, and possibly as early as 50,000 years ago, with the last 6,000 years producing the most evidence for this cultural phenomenon.

New findings challenge the hypothesis that humans first began as hunter-gatherers and later evolved to agriculturists some 10,000 years ago—a hypothesis that at present has no solid basis in proof, yet is readily believed by many.

Genetic manipulation of plants, particularly cereal grains, occurred at some point in prehistory by people who already had the knowledge to do so. These same people created a vital and lasting human food source, no doubt for very specific reasons.

In each of the major areas of the world where plants and animals were domesticated, we find legends, both written and oral, describing the origin of agriculture as a gift of the gods, culture-bearers who taught indigenous peoples agriculture and the sciences of civilization. (I have written about this elsewhere, in an article soon to be posted on this site.) Could this possibly be coincidence, the accident of mere imagination?

Our ancestors left us more than bones, seeds, stone tools, priestly cults and ritualistic incantations to exotic gods—they left us examples of extraordinary feats of engineering, architecture and sustainable methods of agriculture. They left us legends, myths, epics, and sagas. Isn’t it about time we hear them out?

1. A Dictionary of Quaternary Acronyms and Abbreviations, www.scirpus.ca/cgi-bin/dictqaa.cgi?Option=b; May 5, 2004.
2. Smith, Bruce D., The Emergence of Agriculture; Scientific American Library; New York, NY; 1998, p. 165.
3. Dr. David Whitehouse, “World’s ‘Oldest’ Rice Found,” British Broadcasting Corporation News (BBC); October 21, 2003.
4. Harris, David R. (editor), The Origins and Spread of Agriculture and Pastoralism in Eurasia; UCL Press, Ltd.; London, England; 1999 (paperback edition), p. 151.
5. Price, T. Douglas and Gebauer, Anne Birgitte (editors), Last Hunters—First Farmers; School of American Research, Santa Fe, NM; 1995, p. 198.
6. Smith, Bruce, cf. ante, p. 60.
7. Settegast, Mary, Plato Prehistorian; Lindisfarne Press; Hudson, NY; 1990 (paperback edition), p. 3.
8. www.scientificamerican.com; June 22, 2004.
9. Smith, Bruce, cf. ante.
10. Harlan, Jack, The Living Fields; The Press Syndicate (University of Cambridge), Cambridge, U.K.; 1995 (paperback edition), p. 95.
11. Harris, David R., cf. ante, p. 154.
12. Ibid., p. 154.
13. Smith, Bruce, cf. ante, p. 47.
14. Harris, David R., cf. ante, p. 194.
15. Harlan, Jack, cf. ante, p. 34.
16. Price and Gebauer, cf. ante, p. 126.

Reconsidering Human Origin

While the age of information progresses at an ever-increasing speed, we Homo sapiens are faced with extraordinary challenges, both individually and as the reigning species on this planet. More than ever before, new discoveries in the scientific fields of archeology, astronomy, anthropology, geology, and genetics are challenging scientific theories that have served to shape and form our fundamental beliefs about who we are and how we arrived at our present position. Most jeopardized is the famous theory of human evolution advanced by Charles Darwin, the 19th-century British naturalist whose work has shaped scientific and academic thought for the past century and a half.

The Ascendancy of Darwinism

In 1859 Darwin published On the Origin of Species, a seminal work in which he theorized that simple life forms developed into more complex ones through gradual, ascending steps. These steps were the basis of evolution and were governed by “the survival of the fittest,” a process Darwin called “natural selection.” Through this process, a species improved or evolved by adapting to environmental pressures in order to survive. To most people it was known simply as “the theory of evolution.”

Shortly after publication, Darwin’s theory rapidly gained the support of some intellectuals and educators, but many critics rejected some of its controversial ideas. Among the detractors were fundamentalist Christians whose “creationist” views Darwin’s theory flatly contradicted. However, while believers in the Biblical creationist model landed a few crippling blows to the evolutionists, their own acceptance of the Biblical explanation of creation was based primarily on faith, not science. For the most part, proponents of human evolution found nonscientific, religious beliefs to be of little relevance in discussions of human origin, regardless of whether the particular views represented Christianity, Judaism, Hinduism or any other brand of faith.

Over the years, some of these critics’ more scientifically based attacks on Darwinism raised quite a few pertinent issues, causing Darwinism’s supporters to be saddled with the burden of proof—something they have yet to produce. Still, the evolutionists’ camp, increasingly thought by many as a faith-based religion of its own sort, managed to dominate most facets of education under the banner of “righteous science.” The media and most educational systems in Europe and America presented evolution to the public and students as though it were fact. A conjunctive theory of cultural evolution also permeated our society so deeply that anthropology, paleontology, botany, and many other sciences based most of their findings on it. Cultural evolution even became the basic theory for rationalizing almost every dietary trend from raw foods and vegan diets to high-protein and low-carbohydrate diets.

One compelling reason for Darwinism’s powerful hold over scientific and cultural thought has to do with the larger cultural context. The formative years of academic studies in human history were marked by the prevalence of racism and Eurocentric attitudes. Today, while these social conditions have markedly improved, many of the concepts they influenced continue to endure. In most universities throughout the world, students are still taught the theory of African genesis: that we descended from the trees onto the African savanna, evolving from ape to primitive African savage, and that all prehistoric peoples, whether simple hunter-gatherers or agriculturalists, were evolving savages who eventually culminated in modern-day Homo sapiens, the epitome of creation.

Although the term “savage” is no longer used to describe the often-brilliant peoples of prehistory, only recently have our ancestors acquired the true recognition they deserve. Yet, because of early biased viewpoints that defined our textbook perspectives of human prehistory, it is still often difficult for many to perceive our ancient ancestors in any other way than as primitives, who evolved to the point of becoming “civilized” only around 10,000 years ago. For many, this colorless viewpoint still prevails today.

Dogma and Heresy

While there is a wealth of information compiled over the last century on hunter-gatherers, supposedly the forebears of agriculture, confining agricultural peoples to the stone-age ancestry of the Neolithic period has led to much confusion, for a vast amount of evidence suggests otherwise. Our limited perspective is especially confounding when we attempt to study dietary traditions. Undoubtedly, there have been stone-age peoples, but to claim that the majority of human history belongs to a “stone age” in which all peoples were primitives may very well be a blinkered focus on only one part of a far greater, more extensive story.

Evolutionists often point to the spurious fossil record as evidence that mutation and natural selection can explain how evolution happens. Examples include the supposed transitions of single-celled organisms into fish, fish into amphibians, amphibians into reptiles, and so on up to mammals, even though these particular models are no longer considered valid.

You may recall the grade school history book image of a series of creatures emerging through an evolutionary process into a man. Many people are unaware that most scientists no longer recognize this model as valid, although it can indeed still be found in many history books. Furthermore, while most people are aware that there is a “missing link” between humans and the ancestral apes they supposedly share with chimpanzees, and may even be aware that the search for this elusive link is ongoing, few realize that there are many such “missing” evolutionary transitions for animals and plants in the fossil record. These transitions, said to be the result of mutation and natural selection, simply do not show up in the fossil record with any consistency.

This is rather strange: According to theory, there should be an abundance of transitional fossils revealing the transformational steps from one species to another. Instead, what the record shows us is fully formed species that seem to suddenly appear, remain consistent in makeup over long periods, and then eventually disappear. With the exception of a few species, such as sharks and crocodiles that continue to live as they were, living creatures are replaced by what appear to be entirely new and different species. This is true for both prehistoric animals and non-flowering plants, which are supposed to have evolved into flowering plants about 100 million years ago. Here, too, we find a lack of intermediaries between the two types of plants.

Is it possible that this highly influential theory, one that so confidently explains who we are and where we come from, could be seriously flawed? With new players, including the proponents of the “intelligent design,” “lost continent,” and “intervention” theories in the human-origin field, the heated debate on evolution is rapidly coming to a boil. Never before in its short history has this theory faced such intense scrutiny as it is facing today, as the dedicated research of diverse groups of scholars and historians is bringing about a genuine revolution in how we think and what we believe about human origin. Indeed, many believe that these new researchers are armed to the hilt with more evidence against the theory of evolution than those who support it have for its defense. Wielding such compelling evidence, many of these “alternative” historians claim a greater antiquity for civilization than most evolutionists are willing to acknowledge.

Unfortunately, the lengths to which some evolutionists have gone to discredit alternative historians have been at times extreme. For the professional whose work is either directly or indirectly linked to the study of human origin, it is becoming clear that to challenge the theory of evolution is an extremely bold and politically incorrect position to take. Thus, despite the fact that freethinking and open-mindedness are the quintessence of genuine scientific inquiry, the theory of evolution nevertheless has a habit of closing the door to archeological and anthropological finds that do not fit into its ideology. Just as faith is the foundation of religious belief, the need for scientific certainty binds evolutionists to their strong attachment to conventional theory.

Consequently, to challenge the theory of evolution is to be branded a “creationist.” Because the debate has been presented by the media as an either/or issue, with evolutionism and creationism the only two sides of the debate, most believe this is all there is to it—though indeed, nothing could be further from the truth. When the “creationist” label does not conveniently fit the alternative theory in question, the accusation of “pseudo-science” is leveled against the offending alternative historian. However, with increased public awareness and access to new discoveries, the tired ramblings of accepted dogma will continue to be challenged by stimulating, open dialog among those wishing to discuss the highly plausible alternate theories. This article is offered as an introduction and basis for this sort of open-minded inquiry.

Myth or History?

An increasing amount of evidence during the past decade has suggested a history of humankind extending far beyond the accepted textbook timelines. The idea that anatomically modern humans (Homo sapiens) existed in deep antiquity as civilized, agriculturally based peoples with an understanding of astronomy, geometry, architecture and other sophisticated sciences, coexisting with early hunter-gatherers and other primates, is not easy for orthodox historians to accept. To say that the broad acceptance of this idea would have a powerful impact on the scientific certainty of human evolution is a grand understatement. Indeed its effect on evolutionary theory would be earthshaking—not to mention the effect it could have on some organized religions.

For anyone seeking to cut through the confusion as to what to believe about human evolution, I recommend reading Forbidden Archeology by Cremo and Thompson; Shattering the Myths of Darwinism by Richard Milton; and Evolution, Creationism and other Modern Myths by Vine Deloria, Jr. These three books are scholarly works that put human evolution in its rightful place: as a theory in serious need of reconsideration.

Alternative historians have paved the way toward a new understanding of human origins that incorporates the early and current research of orthodox anthropology, paleontology and archeology with other scientific disciplines (e.g., archeoastronomy, engineering, mathematics). Even the written and oral traditions, myths and legends of traditional peoples throughout the world are being openly researched and analyzed for further insights. Fifty years ago, many of these methods were not considered acceptable (let alone standard) methodologies for approaching the study of human origins; today they are proving to be of tremendous assistance in reevaluating early discoveries and aiding in the interpretation of new ones.

A famous case in point illustrates the potential validity of historical records previously regarded as “pure imagination.” Over a hundred years ago, a seven-year-old boy named Heinrich Schliemann, enamored by pictures of the mythological city of Troy, determined that one day he would find the lost city. In 1873, Schliemann discovered the site of Homer’s Ilium right in the location where it was said to have been, a discovery that moved Troy from the realm of myth to that of historical reality.

Apollonius of Rhodes wrote the famous Argonautica, which tells the story of the Greek hero Jason, who made his legendary voyage in search of a “golden fleece” some 3,500 years ago. The story has long been thought to be but one of many Greek myths. However, excavations in some of the areas mentioned in the story have confirmed that the myth was indeed based on real people and real places. Myths and legends, when researched methodically and extensively, have also revealed hidden meanings associated with astronomical and geological events.

Our true history may be just beginning to unfold. With the advent of satellite imaging, NASA has recently discovered lost civilizations in Cambodia, South America, and India. Man-made megalithic structures have also been found off the coasts of Japan and Malta. A recent discovery off the coast of Cuba reveals what appears to be a complex of temples and other structures resembling Mayan architecture. Because of their 2,000-foot depth, these ruins are believed to have sunk around 50,000 years ago! Are these the remains of some of the world’s most ancient civilizations?

One would think that archeologists would be doing cartwheels through the hallowed halls of academia when informed of so many new and exciting discoveries, but such is not the case. Most of the research on these and other mysterious discoveries is being conducted by independent scientists, journalists, and other curious individuals intent on getting answers to long-standing questions. If and when some of these new discoveries prove accurate through current methods of dating technology, this will be further confirmation that human beings were building sophisticated cities and temples 50,000 to 60,000 years ago.

And where there is civilization, there is agriculture.

“Stone Age” Technology

If we accept the theories of human and cultural evolution, our species was not evolved enough to practice agriculture until around 10,000 years ago, and there would have been no large-scale agriculture until around 5,000 years ago, with the advent of the Sumerian culture, often dubbed the “first great civilization” or the “cradle of civilization.”

But recent discoveries of other sites yielding evidence of ancient civilizations contemporaneous with Sumer, or even earlier, have altered the picture. The highly advanced Indus-Sarasvati civilization of ancient India, for example, likely preceded the Sumerians, and its cities spread over an extraordinary 300,000 square miles, many times the area of the Sumerian ruins! (Even Egypt’s 15,000-square-mile area pales in comparison.) Other pre-Sumerian sites may exist near the pre-Incan fortress of Sacsayhuaman, above Cuzco, Peru. No one knows who these megalithic builders were or how they worked, but they possessed a technology that allowed them to cut and fit 400-ton stones into complex, megalithic puzzles. Attempts to date another ancient site, Tiwanaku, have yielded varying results, ranging from 1,000 to 20,000 years ago. The ruins’ upward displacement, however, suggests that the site must be at least many thousands of years old.

At the rate new discoveries are being made, there will probably be many more findings predating Sumer, perhaps by thousands or even tens of thousands of years. When and where, then, did civilization really begin? In trying to answer this question, we must remember that archeology, paleontology, and anthropology still operate from guidelines designed to fit the scientific dogma of Darwinism, which even today serves as a Procrustean bed onto which all new evidence is forced, even when the evidence itself contradicts it. Thus the release of this information to the public may be delayed by years, and we have to understand that we have only begun to uncover the many secrets of our planet’s history, and of our own.

To orthodox historians, the idea of technologically advanced civilizations having existed in deep antiquity is generally regarded as unacceptable because, it is said, there is no proof in the form of technological artifacts. However, this is often a tortuously slanted argument. For example, the detailed machining techniques used on some of the interior blocks of the Great Pyramid and other architectural examples in Egypt clearly exhibit highly technical knowledge, yet the actual machinery that produced them has not been found. So even with evidence of the use of advanced technology in remote times, some historians still insist on seeing the actual technological artifacts that produced this evidence.

This seems a fair enough request, if they would join the alternative historians and put their time, energy, and resources into looking for these missing artifacts. Indeed, it would seem to be a matter of scientific principle that they would do so. Yet, something holds them back. Perhaps it is sheer intellectual inertia, or resistance to the difficulties involved in having to adapt any discoveries of ancient technology to the cultural evolution model. At any rate, a likely place to start would be around some of the newly discovered anomalous, man-made structures located under the sea. These submerged cities are likely to hold evidence for the true origins of agriculture as well.

One fact of the times strongly supports such a search: the proliferation and availability of modern scientific technologies. When the politically correct theories on human origin were first formed, a little over 150 years ago, they were supported by relatively few scientific fields of study with a limited range of technical methods. Today, with so many scientific disciplines, specialties and technologies at our disposal, it is an especially apt time to re-examine the existing theories and ask whether they are really worth keeping intact or are in need of significant overhauling.

For example, even minimal research into the mathematical, astronomical, and engineering feats of the great pyramid of Cheops in Egypt, supposedly built a few thousand years ago, reveals an architectural masterpiece that required the stacking of one million stone blocks, weighing from 2.5 tons up to 200 tons for some of the interior blocks, to a great height with a mathematical precision unequaled anywhere in the world. Yet, to apply orthodox theory, one would have to believe that the people who built it were primitive men, using stone tools and a jury-rigged apparatus of ropes and logs!

Their purpose in building this great structure is believed to have been to create a tomb for a deceased pharaoh whose life was based in a polytheistic cult obsessed with superstitions concerning the afterlife. The evolution theory lacks even common sense in this case, yet it is still promoted by orthodox historians. Not only is it irresponsible for professional scientists to promote such an unsubstantiated theory, it is also an insult to human intelligence. If certain scientific disciplines are to claim a monopoly on historical knowledge, then in all fairness, these disciplines should be able to demonstrate how and why their theories are valid.

Even with modern technology, it is unlikely we could reproduce this masterpiece with such mathematical precision. It is interesting that orthodox scholars so often make meticulous demands for proof of advanced technology in antiquity—yet in instances such as this one, their own theories either ignore the current evidence or explain it away, often with absurd explanations.

Bones, art, megaliths, pyramids, and other anomalous structures from remote periods are among the clues found around the world pointing to human occupation lost in the mists of time. Some of these examples are so old that the dates we have been given for them are tenuous at best. The attempt to fit ancient buildings into a time frame of the past 5,000 years is futile when one realizes how clearly evident it is that many of them have been built and rebuilt over previously abandoned structures. One could argue that the famous Sphinx of Egypt, even with a revised date of 10,000 years ago, does not exceed the orthodox timeline for the onset of agriculture. Common sense suggests, however, that the civilization with the technology and technicians to build the Sphinx must have already been in place well before construction began. How long before? We do not know, but we do know with certainty that these architects were not hunter-gatherers living in “stone-age” conditions!

Egypt is only one of the mysterious ancient civilizations for which the infancy of its art, writing, sculpture, and architecture has yet to be found. These ideas and arts of civilization must have had origins stretching well back in time before their sudden appearance, even though the evolutionary increments of their development are missing from the archeological strata.

In fact, the occurrence of knowledge being won, then lost, then rediscovered “for the first time” is far from uncommon even in our documented history. Columbus’s discoveries of America and Galileo’s pronouncement that the earth was round are two such examples. It is firmly established historical fact that Columbus was not the first to discover America, and the ancient Egyptians, Mayans, and Chinese knew that the earth was round long before Galileo’s time.

Our Agricultural Origins: The Orthodox View

While metallurgy, language, and writing each played a role, agriculture is regarded as perhaps the most influential factor in forming civilization after primate man began to stand erect and walk out of the plains of Africa. Let us take a brief look at the orthodox view of the origin of agriculture and review it from some alternative perspectives. The following timeline incorporates modifications made by historians in order to accommodate recent discoveries that deviate from the original theory.

100,000 years ago: Modern humans appear and migrate out of Africa.

60,000 years ago: Humans increase migrations, build simple boats, and begin sea travel.

40,000 years ago: Humans arrive in Australia. Some hunter-gatherers begin to proto-farm. Cave art appears.

30,000 years ago: Humans arrive in the Pacific Islands.

13,000 years ago: Humans walk across the land bridge (Beringia) from Eurasia to America and consume local fauna to extinction, often called the “Pleistocene overkill.”

12,000 to 11,000 years ago: The Neolithic revolution begins.

10,000 years ago (now pushed back another 10,000 years): Signs of plant and animal domestication appear in the fertile river valleys of modern-day Iraq, Iran, and Turkey. The ice age ends.

5,500 years ago: Architecture, language development, writing, metallurgy, science, and religion appear.

According to this general outline, starting roughly 10,000 years ago, with the glacial ice melting and flooding much of the lowlands, hordes of people headed to higher ground in search of food and shelter. In so doing they had to acclimate themselves to less space and were forced to practice agriculture full time in order to sustain their growing populations.

As farming increased, so did population density and with it the need for more farming, ultimately leading to permanent settlements with communities working together to produce enough food to sustain them through cold winters. While they were reluctant to give up their nomadic lifestyles, converting to agriculture from the hunter-gatherer lifestyle gave these people a sense of security that helped to regulate their lives, eventually enabling them to develop the resources to create civilizations.

Meanwhile, other peoples continued to migrate, consuming and eventually causing mass extinctions of much of the fauna in many locations. As their populations continued to increase, they too were forced to settle and farm.

There are eight areas recognized as the world’s original agricultural regions: China, Papua New Guinea, South America, Middle Asia, Ethiopia, the Near East, the Mediterranean, and India.

Uncertainties in the Theory

This outline represents the generally accepted theory; however, additions to it are ongoing, and not all historians accept it. Viewpoints on any number of issues vary among scholars.

Although this outline begins 100,000 years ago, conservative estimates are that it took hominids some four million years to evolve into modern humans. During most of that time, our ancestral “cousins” were dependent on wild plants and animals for food. Out of such an ancestry, why did some of us, modern Homo sapiens, later become farmers?

Some experts suggest that certain qualities inherent in human behavior were conducive to domestication, namely sedentism, proto-domestication (recognition of species predisposed to domestication), and wealth accumulation. The deliberate planting of stored seed stock is also an action ascribed to our species and not typically found in lower primates. Finally, high population density could be a factor that encourages agricultural development. Yet, in spite of numerous attempts to produce a model using all these factors to show how agriculture started, these ideas have proven inconsistent and have left experts in disagreement.

For example, to name population density as a cause for the invention of agriculture raises more questions than answers. Early peoples would have had other options for dealing with their situation. We might wonder why our early ancestors did not kill newborn females in order to limit population growth, as some modern hunter-gatherers have been known to do. On the other hand, why did they not simply procure greater amounts of abundant wild foods to accommodate their population increase? And how do we explain that most proto-agricultural hunter-gatherer groups and many farming communities are sparsely populated?

Alternative historians cite the contrasting lifestyles in Central America, the Yangtze basin, coastal Scandinavia, and the archaic American Southwest to illustrate the inconsistency of the population-density idea. Specifically in America, domestication occurred long after a major climatic change but well before any significant change in population growth. Available evidence also suggests that increasing population only complicates social structure, rather than promoting more agriculture. Yet in spite of these holes in the population-increase hypothesis, some orthodox historians still claim population density is one factor that caused early hunter-gatherers to first begin farming.

The proto-domestication hypothesis postulates two types of hunter-gatherers: the common hunter-gatherers, whose life has changed little; and the proto-farmer hunter-gatherers, who managed their plants, developed more complex agricultural practices, and eventually became full-fledged farmers. This idea conveniently also supports the cultural evolution theory, with its scenario of Paleolithic proto-farmers harvesting large quantities of wild grasses as staple foods.

At present, though, sedentism, proto-domestication, wealth accumulation and population density are all generally recognized as natural influences on, rather than causes of, our agricultural origins.

Migration and colonization are prevalent in the historical and ethnohistorical records, but this type of evidence is presently unpopular in archaeology, although these ideas are being cautiously reviewed by some scholars. Where rapid changes in material culture, burial customs, and settlement systems coincide with evidence of exogenous domesticated flora and fauna, archeologists need to reconsider worldwide prehistoric migrations. A recognized example exists in southern Scandinavia, where it is evident that agriculture spread through colonization, leading to indigenous adoption and other types of innovation. Some anthropologists suggest that something beyond a natural or biological influence must have motivated early hunter-gatherers to turn to farming, especially in areas where resources were already abundant.

The fact remains that we do not know if agriculture led to the origin of civilization or if the process toward civilization was already well underway by settled hunter-gatherers when agriculture began. Updates, revisions, and contrary evidence are all needed to eventually arrive at some definitive answers. There are many “whys” yet to be answered; because of this, the field is wide open for other theories from anyone who cares to submit them.

The Re-emergence of Agriculture: Alternative Theories

Experts are uncertain about what actually happened to humans before and during the ice age. The many old bones and stone artifacts have held some interesting surprises concerning this period. In an attempt to reconcile some of the perplexing evidence found in strata layers, alternative historians have developed two very important theories that are rapidly gaining acceptance by many historians. These theories are known as catastrophism and cultural diffusionism.

Catastrophism

The theory of catastrophism suggests that several global catastrophes occurred at different periods in the remote past, forming geological features of the earth suddenly rather than gradually, through the evolutionary process.

Many historians acknowledge that catastrophic events occurred around 3,200 years ago, though some suggest that they were not on a global scale. However, these events clearly affected many areas of the world. What caused them is still a point of contention; some experts point to the likelihood of a comet or other celestial body striking the earth and triggering volcanic disruption with accompanying tidal waves. Geological evidence supports this idea.

Current research also indicates that similar catastrophic events occurred about 11,500 years ago and earlier, but on a worldwide scale. For example, most scientists accept that a cometary impact was quite possibly what caused the demise of the dinosaurs millions of years ago. If a small comet or large asteroid shower pelted the earth during the last 50,000 years, such an event could easily have set off a chain reaction of storms, floods, or any number of other destructive natural phenomena.

Over 200 myths and legends from around the world attest to these types of natural disasters. Many climate adjustments occurred from 17,000 to 5,000 years ago. The earth became extremely cold around 12,000 years ago and sea levels rose by 325 feet. Any surviving coastal civilizations would naturally have moved to higher ground.

It stands to reason that before crops can be harvested, agricultural tools have to be developed. The grinding of grains also requires at least a rudimentary stone technology. Most agricultural technology dates to the upper Paleolithic period, but in the Levant and a few other cases such tool development extends as far back as the middle Paleolithic. Indeed, evidence of early agriculture, documented by William Corliss, has been found dating to 26,000 BC (Solomon Islands); 16,000 to 20,000 BC (Indonesia); around 16,000 BC (Thailand); and 15,000 to 16,000 BC (Egypt). This evidence usually takes the form of tools accompanied by plant parts or residues.

Around the world, many examples of tools required for gathering and processing cereals have been found that predate the beginnings of domestication. There are likely many more examples remaining to be discovered, but ruins of civilization earlier than these dates would be covered in thick layers of sediment or submerged thousands of feet beneath the sea. Reflecting upon these dates, it seems apparent that natural disasters brought a severe halt to agriculture, later to be resumed in conjunction with each flourishing civilization.

The upper Paleolithic period is more richly evidenced with human remnants than the earlier lower and middle Paleolithic periods, from which only the most durable items (such as stone tools) survive, many of them unrecognizable as to their actual function. Plant matter decomposes over time under natural forces and is therefore lacking in the two earlier Paleolithic phases.

In 1901, a mammoth was found in Siberia, frozen upright with food still in its mouth. Further examination revealed buttercups and other spring plants in its stomach. This could only have happened if the climate had shifted dramatically in a very short time.

It would take some very astute humans to survive the aftermath of the sort of cataclysmic events that have occurred numerous times during the earth’s long history. These disruptions permanently destroyed some species and often left harsh and brutal environments. Any human survivors would have had to carry the burden of responsibility for reestablishing civilization. Caves conceivably became shelter for these enduring survivors, who left some of their sophisticated artwork, unusual artifacts, and precious seeds as proof of their tenancy. Some of these cave dwellers, unable to reestablish their agricultural practices due to severe weather, would have been forced to revert to a hunter-gatherer-scavenger lifestyle. Other prepared survivors may have set sail to foreign lands with whatever tools of civilization they were able to salvage.

Legends of primitive peoples who survived the cataclysm tell of welcoming the “culture bearers” who arrived by sea or descended from high mountain peaks. According to written and oral traditions, many primitive peoples were relieved and grateful to have the leadership and direction of these multi-racial peoples from afar.

In these instances, the “culture bearers” came in peace, taught agriculture, and helped organize civilization. The indigenous peoples were so enamored by the extraordinary scientific abilities of these strangers that they not surprisingly revered them as “Gods.” (Unfortunately, we also have stories of the European colonial conquests for at least the last 500 years where the “culture bearers” came only to rob, rape, murder, and dominate the lives of indigenous peoples to the point of cultural destruction, including the loss of health and traditional knowledge.)

Eventually, the civilized survivors of an ancient world would again share the planet with surviving hunter-gatherers and hominids of various types. Those who had reverted to a nomadic lifestyle after several generations of extreme difficulties may have suffered a kind of amnesia, having lost the urge to re-create civilization. However, many of these nomadic peoples preserved stories of how their ancestors once lived in great cities with monumental buildings plated in gold and silver, before the world was destroyed by fire and water, in a time when the days were warm and sunny and grazing livestock and flowing fields of golden grain provided nourishment.

The ancient civilizations of Sumer and Egypt represent just two of the most recent examples of such fallen civilizations rebuilt by ancient survivors of far older civilizations that have yet to be discovered.

Catastrophism, along with its increasing amount of documented evidence, sheds light on many of the questions previously unanswered in the historical records. It is a theory well supported by scientific discoveries, and both oral and written traditions, that helps to explain the many unusual artifacts unearthed from the past that simply do not fit the orthodox timelines or cultural model. It also gives credence to the idea that agricultural practices existed in our remote past.

Cultural Diffusionism

The theory of cultural diffusionism describes the cross-cultural exchange between primitive and advanced civilizations before, during and after what is termed the “agricultural revolution.”

Recent findings of cocaine and tobacco, both believed to have originated in the New World, in Egyptian mummies mark an exchange of goods between the old and new worlds during the times of the Pharaohs. This is one of the few publicly announced discoveries showing that ancient peoples were communicating and trading across vast distances. I have personally examined numerous stelae, stone statues, and carvings throughout South and Central America and Asia depicting Africans, bearded Semitic peoples, and Asians, all of which had to be thousands of years old.

Sweet potatoes have been found in Southeast Asia as well as Africa; to this day, we are not certain of their origin. Amaranth, a pseudo-cereal used extensively throughout South America, is the same domesticated species used by some of the most remote Tibetan tribes, who had been using the grain long before any outsiders came to visit. Peanuts have a long history in both Peru and China; cacao exists in Mexico and India; the list goes on. All of these common intercontinental crops are documented to have existed well before the 1500s.

An abundance of historical literature describes world travel and trade in ancient times. In addition to artifacts and agricultural products, strong evidence for diffusionism exists in many social and cultural practices, linguistics and genetic markers.

One of the most popular legends of culture bearers comes from the Sumerian culture of around 3,500 B.C. Numerous cuneiform tablets have been excavated from what is now modern-day Iraq. These tablets reveal a detailed account of culture bearers introducing agriculture and civilization to the Sumerians. This early example of colonization became popular among lay people through the Earth Chronicles of Zecharia Sitchin. However, scholars of Sumerian history criticize Sitchin’s work because of the author’s premise that these Sumerian culture bearers came from another planet.

Whether one reads Sitchin, Kramer, Jacobsen, O’Brien or other historians of Sumerian culture, the stories carry a similar theme of colonizers from somewhere else—whether they are postulated as having come from outer space, a sunken continent or evolved hunter-gatherers is beside the point. The fact that they came bearing the already-established gifts of agriculture and civilization is what is significant. The supporting evidence for some of these stories indicates that they have more substance than mere exaggerated myths. The “culture bearers from elsewhere” theme is repeated with slight variations in legends from the Mayans, Incas, Babylonians, Egyptians, Chinese, Indians, and other ancient cultures.

In tracing these legends, with their themes of floods and transplanted agriculture, we find that some of the stories could be relatively recent, while others may stretch back into Paleolithic times or further. Some of these legendary events could reasonably have occurred as long as 40,000 to 50,000 years ago.

After surviving the devastating effects of ice, floods, and other environmental extremes, many ancient civilized peoples, through their tenacious will to continue, became heroes or gods, the culture bearers of the past.

According to legends, men and women of cultural and racial diversity and various statures arrived from the four corners of the earth, descending from mountaintops or arriving by sea to bring the seeds of civilization. They taught the primitive peoples agriculture and created laws. They taught biodiversity so people could work in harmony with the land instead of merely living off it. They abolished cannibalism and human sacrifice. Grain was a new food for many; for others, it was a re-introduction of a food that had been missing for generations. Today, grain universally symbolizes the most sacred food of our ancestors and is used as a tribute to those who brought us from the darkness of primitivism to the light of civilization.

If legendary timelines are accurate, then orthodox timelines need to be re-evaluated to acknowledge that man was an agriculturalist for more than just the past 10,000 years of his 200,000-year hunter-gatherer existence. Orthodox theories have recently begun to incorporate the idea that some early Paleolithic peoples were proto-farmers before the currently defined timeline, as discarding the abundance of this new evidence would make rational scientific thinking appear hypocritical. However, modifying existing beliefs to conform to the evidence without really changing them significantly or genuinely entertaining other possibilities is unfortunately the norm.

In other words, in order to address the issue of agriculture in civilization existing before the accepted 10,000- to 12,000-year period, it is easy to keep with conventional theories by saying:

“At the beginning of the upper Paleolithic period, around 40,000 years ago, there were some primitive peoples in various parts of the world who were managing animals and plants to such an extent that we can consider them the earliest of agriculturists (proto-agriculturists)…”

As opposed to saying:

“Based on all the existing evidence, including research from other countries and historical traditions, from 50,000 to 40,000 years ago until 10,000 years ago, we have evidence of extraordinary cave artworks, delicate medical operations (cranial surgeries and amputations), astronomical notations, ceramic artifacts, textile and basketry weaving, grinding stones, and writing. All of these are earmarks of civilization, once thought impossible before 10,000 years ago.”

This latter approach would leave historians having to explain who on earth these people could have been, how they evolved or whether they evolved at all, where they came from, and why they developed all these striking earmarks of civilization when the majority of the ancient world’s hunter-gatherer population never came close to establishing civilizations until quite recently, and even then did so only under the influence of already-established civilizations. Why did this not happen as a process of natural evolution?

In the last ten years, an extraordinary amount of evidence has surfaced that points to cultural diffusion having been practiced in remote times, helping to introduce new foods between civilizations throughout the world. If a group of people specialized in a certain food, herb, or spice, it was often traded or sold to another culture, sometimes thousands of miles away, thus increasing the range and variety of available foods for these people. Archeological discoveries of ancient stone statues and carvings thousands of years old revealing many different racial types have been found in areas once thought to be isolated from the outside world until more recent times. Myths and legends exist in almost all traditional cultures regarding the origin of various foods brought from afar in times long past. How long this has been going on no one knows for sure, but it has been going on longer than the 10,000 years of our accepted history of civilized agricultural peoples.

In just the last five years, we have been forced to reevaluate our beliefs about the simple-minded primitive Neanderthals and other early hominids. Although there are no indications of large civilizations and agricultural practice among them, they apparently were not the stereotypical club-bearing cavemen portrayed in our history books. New findings of Neanderthal and Cro-Magnon man reveal advancement well beyond what was originally thought. Both had larger brains than Homo sapiens and equal time to evolve, yet both died off and neither ever reached the level of sophistication of modern Homo sapiens. The overall pattern revealed by current discoveries shows civilized human activities extending continually further back into prehistory. Yet with some exceptions, most of these discoveries are still somehow “managed” by keeping them well within the confines of the cultural evolution paradigm.

An Unfinished Puzzle

Agricultural history has been based largely on the study of bones that remain in strata for thousands of years without breaking down. It is perfectly natural to assume, based on a scanty fossil record, that our Paleo ancestors consumed large quantities of meat along with the gathering of wild plants. After all, that is what we find with some modern hunter-gatherers.

Because there is little evidence in the way of preserved plant matter dating from earlier than 12,000 years ago, it is often assumed that before that time, all plants consumed by our forebears were wild roots, leaves, fruits and wild grasses. After all, the assumption goes, human beings had not yet evolved to the point of agricultural necessity, having existed up to that point in varying states of primitiveness.

The human affair with agriculture and the currently accepted theory for its origin can be likened to a large puzzle: a few of the pieces fit nicely into place—and all others are forced into the remaining empty spots, whether or not they actually fit. The puzzle is far from complete. At this juncture, there is no proof that all humans on planet earth were primitive hunter-gatherers-scavengers before 10,000 or even 20,000 years ago, and there is plenty of evidence that evolved agricultural peoples existed long before 10,000 years ago.

Today we as “civilized” societies represent the majority of the earth’s people, with a minority consisting of nomadic hunter-gatherers still existing in the more remote parts of the world. It is plausible that at different times in pre-history, this trend may have been reversed, with civilized agricultural peoples being the remote minority.

The idea that our present civilization represents the ultimate in evolutionary achievement is a cultural bias that limits our future and ignores some of the evidence from prehistory. Perhaps the rise and fall of great civilizations is an event pattern that has occurred numerous times in prehistory. Before dismissing this idea as mere speculation, consider how often the accepted theories of human and cultural evolution have had to be updated to fit ongoing discoveries. We must remember to consider all the evidence and keep an open mind when confronted with new finds.

Some prominent historians believe that as hunter-gatherers, we had plenty of free time to do whatever we wanted, and that it was the heavy toil of agriculture that began our downfall. Some of these same historians also believe that agriculture brought with it all sorts of problems and had an overall corrupting effect on the happy lives of the formerly free-living primitives. But such beliefs are partially based on the observation of modern agricultural practices—not of the biodiverse methods practiced by many of the ancients. Allowing for the results of sudden earth changes through catastrophic events, agriculture was not so much a cause of peoples’ demise as some would like to believe.

When we listen to the ancient sagas and allow their accounts to help provide a coherent context for our scientific strata evidence, we find an increasingly persuasive picture in which ancient civilizations equal to, and in some ways more advanced than, ours have played a very real part. The members of these advanced civilizations quite possibly shared this planet with a variety of primates and primitive humans for many tens of thousands of years, in cycles of the recession and re-emergence of agriculture.

References

In Search of the Cradle of Civilization, Feuerstein, Kak and Frawley

The Giza Power Plant, Christopher Dunn

Last Hunters—First Farmers, Ed. T. Douglas Price and Anne Birgitte Gebauer

Ancient Man, William R. Corliss

Earth’s Shifting Crust, Charles Hapgood

The Shining Ones, Christian and Barbara Joy O’Brien

Underworld, Graham Hancock

Neanderthals, Bandits and Farmers, Colin Tudge

The Lost Civilizations of the Stone Age, Richard Rudgley