Saturday, February 26, 2011

Prostate Cancer Risks

Let's look at the risk factors for prostate cancer and dissect them for what they are worth. Some appear to have proven causal links, while others do not quite carry that weight.

Male Genitourinary Tract (Front view)

Male Genitourinary Tract (Side view)


It is well known in the cancer literature that advancing age leads to a rising incidence of cancer. This is not organ specific: all organs (breast, prostate, colon, etc.) are subject to an increasing risk of developing cancer. The suspected reasons include dysregulation of the genetic system, accumulation of various toxins over time, and oxidative stress on the DNA.

Some sobering numbers have emerged. A new diagnosis of prostate cancer is made every 2½ minutes, and a prostate cancer related death occurs every 16½ minutes. Prostate cancer affects 1 in 6 males in the US. This is age adjusted as follows:

≤40 years: 1 in 9,422
40 to 59 years: 1 in 41
60 to 69 years: 1 in 16
70 to 79 years: 1 in 8

Overall Risk is 1 in 6
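For readers who prefer percentages, the "1 in N" odds above convert directly; a minimal sketch:

```python
# Convert the "1 in N" age-adjusted odds from the table above into percentages.
# The brackets and odds are taken from the text.
odds = {
    "<=40 years": 9422,
    "40 to 59 years": 41,
    "60 to 69 years": 16,
    "70 to 79 years": 8,
    "overall": 6,
}

for bracket, n in odds.items():
    pct = 100.0 / n
    print(f"{bracket}: 1 in {n} = {pct:.2f}%")
```

The overall "1 in 6" works out to roughly 16.7%, and the under-40 odds to about 0.01%.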

More than 65% of all prostate cancers are diagnosed in men over the age of 65.

Ethnicity: African American males.

African American men have a 19% lifetime risk, meaning nearly one in five will be diagnosed with prostate cancer. This is a 60% higher incidence than in the white American male population. The risk in African American males continues to increase with the number of immediate family members who have prostate cancer:

It is 83% with two immediate family members affected and an astronomical 97% with three. For every 100,000 African American males, 181 will have prostate cancer. The risks are attributed mostly to genetic and dietary factors.


Chromosome 1 carries the HPC-1 locus (region 1q24-25), a gene suspected to play a role in the African American community, especially in early onset of the disease. A few families have demonstrated such a link, and about 1 in 500 individuals carry the mutated gene. Another gene implicated in familial risk is the HPC-2 mutation (1q42.2-43). Genetic risks account for about 10% of all prostate cancers.



Other prostate cancer risk related genes have been mapped to two other parts of chromosome 1, as well as to chromosomes 17, 20 and X. Additionally, there is some suspicion that the CAG repeat length in the exons coding for the androgen receptor might also play a role.


Charred meats and high intake of dietary fats and dairy products are implicated in prostate cancer risk, as are low levels of selenium. Dietary agents showing some protection include green tea, isoflavonoids and soy. Lycopene, found in tomatoes, does not seem to have the beneficial effect previously thought, based on current data. The recommendation is to increase fruits and vegetables in the daily diet and reduce animal fat and dairy consumption.


Agent Orange

Men exposed to Agent Orange (dioxin, the defoliant used in Vietnam) may be at risk. Several epidemiological studies hint at a relationship between Agent Orange and prostate cancer. The studies imply but cannot confirm a direct relationship, since they ask about possible exposure and are therefore subject to memory recall bias.
Most studies of Vietnam veterans have not found an excess risk of prostate cancer, but results from a few studies have suggested a possible link. Nevertheless, the Institute of Medicine has determined there is sufficient evidence of an association between Agent Orange exposure in Vietnam veterans and prostate cancer risk to cover medical care.

Spraying in Vietnam with Agent Orange


There is insufficient evidence to link alcohol with prostate cancer at this time.
There is insufficient evidence to link smoking as a cause of prostate cancer, but there is enough evidence that smokers with prostate cancer have a higher likelihood of death.


Laboratory data implicate cadmium as a prostate carcinogen, but epidemiological studies do not convincingly implicate it as a cause of prostate cancer. More studies are needed to evaluate this compound. Needless to say, exposure should be avoided.


Tire Plant Workers and Firefighters.

There is insufficient evidence to implicate prostate cancer with these vocations. Based on the known studies in the English-language literature, there is some causality with respiratory tract (lung) and bladder cancers.

Tire factory


Farmers are exposed to a multiplicity of chemicals, such as pesticides, herbicides, fertilizers, solvents, engine exhaust gases and organic dust, and to biological agents such as zoonotic viruses, bacteria and fungi. A slightly higher risk of prostate cancer is seen in some studies, but the findings are too equivocal and nonspecific to establish correlation or causality. Although some of the toxins in herbicides have been implicated in carcinogenesis, the current epidemiological evidence does not support this hypothesis.



There is insufficient evidence that painters have a higher risk of prostate cancer. Painters do have a slightly higher risk of developing bladder cancer but no positive association has been implicated with prostate cancer.

Honore Daumier (painter)

The only three risk factors with demonstrated correlation and causality are age, African American ethnicity and genetics. A fourth possible risk-provoking factor is the dietary intake of fats, dairy products and meats, and more and more data seem to support it. The remainder of the touted list has little or no supporting evidence.

The author recommends that all readers who suspect any such risks in their own lives consult their personal physician for further clarity and preventive measures.


Carpten J, et al.: Germline mutations in the ribonuclease L gene in families showing linkage with HPC1. Nature Genetics 30: 181-184, 2002.

American Cancer Society.: Cancer Facts and Figures 2010. Atlanta, Ga: American Cancer Society, 2010.

Miller BA, Kolonel LN, Bernstein L, et al., eds.: Racial/Ethnic Patterns of Cancer in the United States 1988-1992. Bethesda, Md: National Cancer Institute, 1996. NIH Pub. No. 96-4104.

Ruijter E, van de Kaa C, Miller G, et al.: Molecular genetics and epidemiology of prostate carcinoma. Endocr Rev 20 (1): 22-45, 1999.

Isaacs SD, Kiemeney LA, Baffoe-Bonnie A, et al.: Risk of cancer in relatives of prostate cancer probands. J Natl Cancer Inst 87 (13): 991-6, 1995.

Ma et al. A systematic review of the effect of diet in prostate cancer prevention and treatment. Journal of Human Nutrition and Dietetics, 2009; 22 (3): 187

Grönberg H, Bergh A, Damber JE, et al.: Cancer risk in families with hereditary prostate carcinoma. Cancer 89 (6): 1315-21, 2000

Verhage BA, Baffoe-Bonnie AB, Baglietto L, et al.: Autosomal dominant inheritance of prostate cancer: a confirmatory study. Urology 57 (1): 97-101, 2001.

Kolonel LN: Fat, meat, and prostate cancer. Epidemiol Rev 23 (1): 72-81, 2001

Sahmoun AE, Case LD, Jackson SA, Schwartz GG: Cadmium and prostate cancer: a critical epidemiologic analysis. Cancer Invest 23 (3): 256-63, 2005.

Rafnsson V: Commentary: Farming and prostate cancer. Occup Environ Med 64: 143, 2007. doi:10.1136/oem.2006.030932

Parent ME, Siemiatycki J: Occupation and prostate cancer. Epidemiol Rev 23 (1), 2001.

Acquavella JF. Farming and prostate cancer. Epidemiology

Tuesday, February 22, 2011

Prostate Cancer and PSA

With more than 217,730 men estimated to be diagnosed with prostate cancer in 2010 and almost 32,050 deaths, this disease warrants attention.

So I decided to share with you certain facts related to Prostate cancer so as to try to dispel myths that are propagated in magazines, books and online media.

Let's look at the prostate gland itself first.

The prostate gland sits just below the bladder, at the base of the penis. It surrounds the urethra and is about the size of a plum. The cells in the gland secrete PSA, an enzyme that liquefies clotted semen to allow the sperm, produced in the testicles, to flow. The prostatic fluid is rich in citrate and zinc and has an alkaline pH to counter the germ-protective acidity of the vaginal fluid. Simple enough, right?

Let us discuss PSA, which has notoriously been the subject of derision and confusion. As mentioned, PSA is an enzyme that is maximally present in the ejaculated seminal fluid. A very small amount gets into the bloodstream even in men with healthy, normal prostate glands, and that is what registers in laboratory tests.

As the prostate gland enlarges, as happens in benign prostatic hypertrophy (BPH), the PSA value rises many-fold. That brings up the positive predictive value (PPV) of this test in prostate cancer.

The PPV, not to get too technical, is the ability of a test to predict the presence of disease, in this case prostate cancer.

The lab value considered the cut-off for such prediction is 4 ng/ml. The PPV of PSA alone is 23%, which means that roughly 4 out of 5 people above this cut-off will NOT have prostate cancer.

Based on other studies, if the PSA is complemented with a digital rectal examination (DRE) by a urologist, the PPV of the combined DRE and PSA rises to 36-47%, meaning about 2 of 3 people with a positive combined test will NOT have prostate cancer.

Now if we add a transrectal ultrasound (TRUS), the PPV becomes 53%, meaning about 1 of 2 people with all these tests indicating suspicion will yield a diagnosis of prostate cancer.
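The arithmetic behind these PPV statements can be sketched; the counts below are illustrative, chosen only to reproduce the percentages quoted in the text:

```python
def ppv(true_positives, false_positives):
    """Positive predictive value: fraction of positive tests that reflect true disease."""
    return true_positives / (true_positives + false_positives)

# Illustrative counts chosen so PSA alone yields the quoted 23% PPV:
# of 100 men with PSA > 4 ng/ml, roughly 23 actually have cancer.
psa_alone = ppv(23, 77)
false_alarm_fraction = 1 - psa_alone   # ~0.77, i.e. most positives are NOT cancer

# PPV values quoted in the text for the combined tests:
psa_dre = 0.47        # upper end of the 36-47% range for PSA + DRE
psa_dre_trus = 0.53   # PSA + DRE + TRUS
```

Each added test raises the fraction of positive results that really are cancer, which is the whole point of combining them before a biopsy.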

Individuals at risk for Prostate Cancer:

1. African American men, who are also more likely to develop the cancer at every age
2. Men who are older than 60
3. Men who have a father or brother with prostate cancer

Other people at risk include:

4. Men exposed to Agent Orange
5. Men who abuse alcohol
6. Farmers
7. Men who eat a diet high in fat, especially animal fat
8. Tire plant workers
9. Painters
10. Men who have been exposed to cadmium

So do all individuals need all three tests? The answer is no, unless you are at high risk to begin with: African American heritage, a strong family history, or a known mutation of the BRCA1 or BRCA2 tumor suppressor genes, in addition to the features mentioned above. Also, if a PSA screening test has revealed a value higher than 4 ng/ml, these additional tests may be needed prior to an interventional prostate biopsy.

Also of note, and of equal importance, is that the PPV is lower in the 5th decade of life than in the 6th and 7th decades. In other words, the older the person, the more likely an abnormal PSA level indicates prostate cancer.

Another beneficial test for further defining the value of an abnormal PSA is the free PSA. This test is used in individuals with a PSA value between 4 and 10 ng/ml. The higher the free PSA as a percentage of total PSA, the better the chance that the elevated value is NOT indicative of cancer.
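A minimal sketch of the free-PSA calculation; the patient values are hypothetical, and no particular decision threshold is assumed here:

```python
def percent_free_psa(free_psa, total_psa):
    """Free PSA as a percentage of total PSA (both in ng/ml)."""
    return 100.0 * free_psa / total_psa

# Hypothetical patient in the 4-10 ng/ml "gray zone" described in the text:
total = 6.0   # ng/ml total PSA
free = 1.8    # ng/ml free (unbound) PSA
pct = percent_free_psa(free, total)   # 30% of the PSA is free
```

Per the text, the higher this percentage, the better the chance the elevated total PSA is not due to cancer; the actual cut-off is a clinical decision for the treating physician.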

One last note on PSA: there is some belief that PSA itself may have anticancer properties, inducing apoptosis in premalignant cells, but this has not been verified.

Again as always it is important for all individuals to confer and seek answers from their treating physician to remain fully informed and objective in the decision making process.


1. American Cancer Society. Cancer Facts & Figures 2010. Atlanta, Ga: American Cancer Society; 2010

2. Tan HH, Gan E, Rekhraj I, Cheng C, Li MK, Thng P, Tan IK, Yo SL, Poh WT, Foo KT: Use of prostate specific antigen (PSA) and transrectal ultrasound (TRUS) in the diagnosis of prostate cancer -- a local experience. Ann Acad Med Singapore 24 (4): 550-6, 1995.


Monday, February 21, 2011

Logic of Failure and Failure of Logic: Mathematical Models

If the butterfly flaps its wings… so goes the chaos theory of dynamical systems, where sensitive dependence on initial conditions determines the disorder of the system as a whole.

Consider that we live in an extremely complex world governed by multivariate conditions; transposing each condition onto a theoretically based model would require zillions of terabytes to compute, which we do not yet have, and that creates a problem for such enterprises. Herein lies the premise of this undertaking, which only the brilliant mind of Edward Lorenz could have named the "Butterfly Effect."

                                           Edward Lorenz's Strange Attractor

Each piece of the whole exerts influence on the whole in ways that we cannot completely comprehend. Our understanding continues to evolve and the Eureka moments, encompassing as they may be, are also expositions of the limitations of the dynamical system that we do not fully comprehend. Nothing is in isolation and nothing changes without some effect from something else. For instance the human population explosion over the past century is associated with consequences, such as pressures for survival with corresponding extinction of some species as well as artificial preservation of others for our own needs (horses and dogs for instance). So to say we live in a dynamic system is an understatement of our own existence. It is a dynamic system that is constantly in flux. A plastic mold constantly being remolded.
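Lorenz's sensitive dependence on initial conditions can be sketched numerically. The sketch below uses his classic parameters with a crude Euler integration (an assumption for brevity; serious work would use a higher-order integrator): two trajectories that start a millionth apart end up far apart.

```python
import math

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One crude Euler step of the Lorenz system with his classic parameters."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

# Two trajectories that start a millionth apart.
a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-6, 1.0, 1.0)
for _ in range(2000):          # integrate to t = 20
    a = lorenz_step(*a)
    b = lorenz_step(*b)

separation = math.dist(a, b)   # vastly larger than the initial 1e-6
```

The initial difference of one part in a million is amplified by many orders of magnitude: the butterfly effect in a dozen lines.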

Progress for humanity is littered with the refuse of failure. It seems to mimic life, where 1.5 billion species have gone extinct based on fossil data. Our failures may not be as dramatic as that, but they are indicative of our limits.

Our failure stems from the limits of our understanding of the initial and subsequent conditions that are prerequisites of a dynamic system, and of the perceived outcome. Therein lies the tail of this tale.

Mathematical Modeling:

To understand nature, humans have used the art of reduction: breaking something down to its most basic elements so that we can see how it ticks. This philosophy has given birth to multiple industries that use nature as a template: aerospace, medical, engineering and nuclear, to name a few. Having mastered the art of seeing things flayed open, with all the pieces visible, has also inculcated in us the art of modeling, which helps predict a new design's function, performance and even its future.

Mathematical modeling is a full-blown tool used in all sciences and disciplines. The question is: do we understand it?

Ah! There is the rub. Mathematical modeling is a philosophical construct that has with time become the process du jour of scientific thought. It has been used successfully many times, and many times it has come up short. As much as the former is true, it is the latter that gives us pause. From our failures comes the light of knowledge. This, then, is the fruit derived from dashed hopes that advances the language of mathematical perfection: something we seek but may never realize.

The fundamental question is whether it is possible to predict response of some process after various numerical inputs have been used to determine the outcome with a high degree of reliability.

The first order of business is the initial thought, also called "the concept." The concept is based on several inputs, including materials, mechanics, interacting parts, procedures, etc. This minimalistic list is left to the "judgment and experience of the mathematical analysts." Any increase in the number of variables, such as interacting and changing external dynamic factors, requires a complex series of undertakings involving probability, and this is where mathematics steps in.

So mathematical models vary by the desired study, the variables involved, the materials used, and the intensity of the computational analysis. For instance, a model based on the aeronautical dynamics of a composite material will require information on the deformation patterns, tensile strength and elasticity of the material under study.

Carbon fiber Composite material

All such comparisons would be based on aluminum, the gold standard that has been in use for decades. Additional information regarding tolerance to fatigue (recall the Aloha Airlines accident) and resistance to fire (the use of Nomex fibers in composites, as it relates to the strength or weakness of the material) is brought to bear on the new concept to be exploited.

Aloha Airlines with decompression due to metal fatigue

After the initial conceptualization process has been completed, the concept then has to be validated by the rigor of predicting component/subcomponent and/or system failures. This is done via simulated experiments. At this point all subcomponents, components, subassemblies and the total assembly of the product under consideration are forced through known “stressors.” A successful prediction based on inducing failures under different external variables gives validation to the concept. The errors that creep in could relate to the initial conceptualized model, numerical approximation (recall Edward Lorenz’s error), the simulated experiment, or the statistical variables employed. Since most of the modeling is based on predictable axioms, they are axioms nevertheless and as such suffer the wrath of nature’s whims.

Rotors on Blackhawk Helicopter

Remembering that a mathematical model is an order of reference within the hierarchy of a larger model must give the analyst pause and concern at every level. For instance, an analyst performing a validation experiment on a rigid machine exposed to large degrees of vibration has to have harmonic analysis in mind as one of the parameters. An example here is the rotor blade: subjected to supersonic tip speeds, it undergoes deformation and linear and rotational stresses, and places significant centrifugal forces on the rotor disc. Fatigue damage in a homogeneous material is especially dangerous because it is unpredictable, giving no prior notification of imminent failure; it occurs suddenly and shows no exterior plastic deformation. Advanced composite materials, by contrast, exhibit gradual damage accumulation to failure. Typically, matrix cracking and delamination occur early in the rotor life, while fiber fracture and fiber-matrix debonds initiate early and accumulate rapidly toward the end, leading to the final failure. Thus actual experimentation showed that composite rotor blades are better predictors of failure than homogeneous ones; the mathematical modeling did not reveal the anomalous behavior of the different materials in the simulated outcome. In a calm wind environment the dynamics would differ from conditions of strong wind shear, where performance, function and integrity may be challenged. After several years of research and development, Boeing designed the 787 aircraft with 50% composite material for its light weight, improved efficiency and overall fuselage strength.
Boeing 787

A mathematical model estimating population using the "birth rate" and "death rate" does not really tell us the whole story. Changing the parameters to "per capita birth rate" and "per capita death rate" scales the rates to the population being measured and thus gives a better record. Using the right parameter in the methodology makes a difference in the computed outcome and the desired result.
Robert Thomas Malthus

History teaches us that the Malthusian model of population expansion versus resources was woefully inadequate, since it had no inputs for catastrophes. It was a steady-state model, which, as we know, is not how humans and nature behave; neither did it incorporate externalities like the influenza pandemic of 1918, which infected roughly a quarter of the world's population. His modeling was simplistic: steady-state growth of the population outstripping the food supply, proclaiming an incipient crisis.
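The steady-state Malthusian model described above fits in a few lines; the per-capita rates below are illustrative assumptions, and the point is that nothing in the model can represent a catastrophe or a resource limit:

```python
import math

def malthus(p0, birth_rate, death_rate, t):
    """Steady-state Malthusian model: constant per-capita birth and death rates,
    no catastrophes and no resource limits, so growth is purely exponential."""
    return p0 * math.exp((birth_rate - death_rate) * t)

# Illustrative rates (assumptions, not historical data):
p0 = 1_000_000
pop_50 = malthus(p0, birth_rate=0.03, death_rate=0.01, t=50)  # e * p0 after 50 years
```

With a net per-capita growth of 2% per year, the population multiplies by e (about 2.72) every 50 years, forever; it is exactly this relentless exponential, untouched by famine, war or epidemic, that made the model inadequate.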
Adding a single variable can yield an unexpected result. Consider the effect of a spring-loaded pendulum, or a double pendulum, on the chaotic movement of the pendulum itself, as calculated in the Wolfram Mathematica program.

Spring loaded Pendulum

  Double Pendulum

So what is this mathematical modeling genie and how do we use it and benefit from it?

Mathematical Modeling streamlines some processes:

It conceptualizes a thought.
The thought is placed through a simulated rigor.
The thought is subjected to various externalities.
Modifications of the model can test different scenarios on the same hypothesis.

The problems that are inherent in Mathematical Modeling include:

The concept may not be proven.
The concept is insensitive to the variables.
It is too simple in its characteristics.
It is too complex, requiring unavailable computing power.
The model cannot lend itself to a mathematical solution.

Uninspiring as it may be, there are small-order-of-magnitude properties that are rarely incorporated in the validation process, and these include:

Symmetry: if x = y then y = x
Transitivity: if x = y and y = z then x = z

Using only reflexivity (x = x) limits the useful load of the experiment.

Validation processes include content, context and criterion: content refers to the materials and information available, context refers to the externalities, and criterion is based on the specific environment of use. All three validations have to be undertaken in the simulated experimental model to achieve a high degree of confidence in its reliability. After all, the predictability of the validation process is based on repeatability. The Monte Carlo method of repeated computer simulations, used to draw inferences about the viability, repeatability and validity of an experiment's outcome, is another mathematical model used in various scenarios.
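The Monte Carlo idea of repeated simulation and repeatability can be sketched with a toy experiment; estimating pi here is a stand-in for any simulated validation run:

```python
import random

def monte_carlo_pi(n, seed):
    """One simulated 'experiment': estimate pi by sampling n random points in
    the unit square and counting how many fall inside the quarter circle."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n

# Repeat the simulation with different seeds and check that the runs agree;
# this repeatability is what lends the model its validity.
estimates = [monte_carlo_pi(100_000, seed) for seed in range(5)]
spread = max(estimates) - min(estimates)
```

Each run lands close to pi, and the runs agree closely with one another; a model whose repeated runs disagreed would fail this kind of validation.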

A mathematical model is foundationally a philosophical and logical construct. Philosophy is a nuanced form of understanding, and logic depends on the values of its parameters; both constructs are highly subject to perception, understanding, resource limits and bias. Thus the logic of failure can be a failure of logic.

Logic of Failure:

A model that did not consider the strange attractors:

Challenger Liftoff

The Challenger disaster: on January 28, 1986, after a cold spell in Florida, the Challenger lifted off in a highly publicized event.

Unbeknownst to the occupants of the Space Shuttle cockpit, the O-rings, or "toric joints," had become brittle from overnight exposure to temperatures below their glass transition temperature of about 40 degrees Fahrenheit. They failed 73 seconds into flight. The failure of the O-ring resulted in the expulsion of high-temperature gases onto the external fuel tank, leading to a catastrophic "system anomaly!"
System Anomaly

The risk analysis done prior to the Challenger flight, based on previous such space flights, put the chance of failure at 1 in 100,000, but later analysis showed it was realistically closer to 1 in 100. Given that this was the tenth such mission and no previous failure data were available, a crude assessment of the real risk of failure would have been on the order of 1 in 10. The O-ring issue had not been previously addressed, though it had been considered in the launch parameters, which were set at 50 degrees Fahrenheit. The lower overnight temperatures, however, were not factored into the equation.
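A simple Bayesian device, Laplace's rule of succession (an illustrative tool not mentioned in the text), gives a feel for why "nine flights, no failures" could never have justified a 1-in-100,000 estimate:

```python
def rule_of_succession(failures, trials):
    """Laplace's rule of succession: estimated probability of failure on the
    next trial after observing `failures` failures in `trials` attempts."""
    return (failures + 1) / (trials + 2)

# Nine prior flights with no failures:
p_fail = rule_of_succession(failures=0, trials=9)   # 1/11, near the 1-in-10 figure
# versus the official pre-flight estimate:
official = 1 / 100_000
```

With so few trials, the data simply cannot distinguish a 1-in-11 risk from a 1-in-100 risk, let alone support a claim of 1 in 100,000.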
Richard Feynman (Nobel Laureate)

The noted physicist Richard Feynman dramatically demonstrated the cause of the failure before a nation in mourning.
Xray of the O-Rings

H1N1 epidemic: in the spring of 2009 the CDC (Centers for Disease Control and Prevention) made a bold and provocative declaration that the estimated number of H1N1 cases in the country was "upwards of 100,000."
H1N1 virus

At that point in time only 7,415 cases had been confirmed. These lofty projections were based on mathematical modeling done by two supercomputers. Both computers, fed similar data, drew the same conclusion; the motto "rubbish in, rubbish out" is self-explanatory. The model projected an exponential rise in cases based on comparators drawn from the influenza pandemic of 1918. The World Health Organization predicted dire outcomes of a worldwide pandemic and issued orders for quarantine of suspected individuals. Fortunately that proved to be untrue, but in the process the world panicked. Conjecture and axioms have a way of loading up and ganging up on unsuspecting scientists and lay people alike. Airline traffic declined, hotels lost bookings, and conferences were cancelled, with huge unanticipated economic losses. Those forced to travel wore unwieldy masks to protect themselves, furthering the fear. This was not a case of "red face" or "black eye." It was a debacle promulgated by scientific folly.
HIV viral load and the CD4 levels

HIV progression: two high-profile papers related to HIV and AIDS were published in the prestigious journal Nature in 1995. The media buzz predicted a coming Armageddon: large-scale population decimation as a result of this infectious virus. Human existence seemed at stake. The premise was to treat all patients infected with HIV, even if they were asymptomatic. The two studies, by Drs. David Ho and George Shaw (the Ho/Shaw predictive model), used the viral load and the quantitative value of the CD4 T cells (immune cells) as the parameters for their therapeutic assertions. The rise of the viral load was one parameter and the fall of the CD4 cells the other. There was a reciprocal relationship between the two, and thus the model predicted that, to prevent an asymptomatic carrier from progressing to full-blown AIDS, patients should be "hit hard and hit early" with a protease inhibitor. There was no control group in either of the studies done separately by the doctors. Large numbers of patients were treated using the model. Eventually, data gleaned from these treatments showed that the initial drop in the viral load was associated with a rise in CD4 cells, but over a prolonged period of close observation the reverse happened and the viral load increased. So there was no validation of the conceptual design, which had never been rendered through a properly designed scientific study with a control group to determine real efficacy. The unintended consequence of treating asymptomatic patients was that it created, in short order, multiple mutations in the virus that became resistant to the protease inhibitor, so those with the disease could no longer be helped by it. After many years of unnecessary treatment this method was abandoned.

The message: this modeling was based on only two parameters and no control group. A preexisting opinion (bias) became a forced finding of fact. The model generated with the two parameters, in isolation from true knowledge of the various players of immunity, was at best self-delusional and at worst unnecessarily forced therapy upon countless asymptomatic patients, with the resultant viral mutations showing resistance to the drug in use.

The Failure of Logic:

A model that considered strange attractors:

Economic Black Swan effect: one can trace the history of the 2007-2008 market meltdown and the fall of Wall Street giants like Lehman Brothers, Bear Stearns and Washington Mutual to two Nobel Prize winners, Robert Merton and Myron Scholes, who with Fischer Black propounded the Black-Scholes option-pricing model. The model assumes that the prices of heavily traded assets follow a geometric Brownian motion with constant drift and limited volatility. For a stock option, the model incorporates the constant price variation of the stock, the time value of money, the option's strike price and the time to the option's expiry. The Black-Scholes option theory became the darling of Wall Street. It created derivatives, and the CBOE (Chicago Board Options Exchange) enjoyed a tremendous rise in contracts and therefore revenues: in 1973 the Exchange traded 911 contracts, and in 2007 the volume of contracts reached one trillion.
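For readers who want to see the model concretely, here is a minimal sketch of the Black-Scholes call-price formula; the inputs are illustrative assumptions, not market data:

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call: spot S, strike K,
    risk-free rate r, volatility sigma, time to expiry T (in years)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Illustrative inputs: at-the-money call, 5% rate, 20% volatility, one year.
price = black_scholes_call(S=100, K=100, r=0.05, sigma=0.2, T=1.0)
```

Note what the formula takes for granted: sigma is a single constant. The violent, fat-tailed moves discussed below simply have no place in these five inputs.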
Myron Scholes (Nobel Laureate)

Messrs. Scholes and Merton started the now infamous LTCM (Long-Term Capital Management) hedge fund. The fund was based on absolute-return trading strategies, including fixed-income arbitrage using convergence trades that exploited asymmetries among US, Japanese and European government bond prices, combined with high leverage. That leverage exposed the fund to a 25-to-1 debt-to-equity ratio in its later stages. The meteoric monetary gains of 40% in 12 months from this exploitation lasted until the East Asian financial crisis of 1997; then Salomon Brothers' withdrawal from arbitrage and the Russian financial crisis of 1998 sealed the insolvency of the mathematically modeled LTCM hedge fund. Investors ran from Japanese and European bonds to US Treasuries for safety, and in so doing erased and reversed the asymmetries underlying the models used by the LTCM managers, leading to huge losses in less than four months.

Keeping things in context, the Black-Scholes model was a simplified model that used laws akin to thermodynamic diffusion to describe changes in share prices. Gentle trending above and below the mean was the premise of the stock movement, rather than the violent seesaw that happens in the real world. The model gave credence to slow movements of share prices in short-term trading strategies rather than to real-world events that can whipsaw a trending market. Computers running buying and selling software that hits the market at record speed can create havoc with share prices, driven by programs created by quantitative financial analysts, or quants. The market is a living, breathing mechanism based on desires expressed singly by individuals or in large packets by computing devices, governed ultimately by humans. That was not a parameter used in the Black-Scholes model.

In keeping with these economic modeling methods, the quants then used the derivatives market to develop products such as CDOs (Collateralized Debt Obligations) and CDSs (Credit Default Swaps), bundled securities.
The rapid increase of CDOs

The concept was simple: take a large number of mortgages of different risks, package them together and securitize them. The bundled mortgages would be rated by an agency paid by the CDO creators, and everyone would be content: the investor, comforted by the rating; the loan originator, by the diversity of the portfolio; and the seller, by the pricing power of the product. The mortgage payments were allocated to the triple-A investors first and then to the rest; the equity investor, in case of default, was left holding a useless IOU. These large caches of securitized mortgages made the banks and lending agencies hustle for more lending. Individuals they would not have touched in prior years due to financial risk were now welcomed, including people who could not afford the mortgage payments and held no assets as collateral for the loans.

Housing prices

In their paper published in August 2010, "The Failure of Models that Predict Failure: Distance, Incentives and Defaults," the authors Uday Rajan, Amit Seru and Vikrant Vig state:

“As the level of securitization increases, lenders have an incentive to originate loans that rate high based on characteristics that are reported to investors, even if other unreported variables imply a lower borrower quality…To illustrate this effect, we show that a statistical default model estimated in a low securitization period breaks down in a high securitization period in a systematic manner: it under-predicts defaults among borrowers for whom soft information is more valuable. Regulations that rely on such models to assess default risk may therefore be undermined by the actions of market participants” 

Bank Leverages

The house of cards fell, swallowed a country and plunged the world into a fiscal abyss.

The banks and mortgage lending agencies, looking for a quick profit, and unaware individuals, looking for a picket-fenced home, all participated in the "perfect storm." The credit default swap market fed on that desire and at one point was estimated at $55 trillion!

Using mathematical modeling has its benefits when dealing with an idea; however, in a complex world where the variables number in the millions if not billions, the chance of creating a model that satisfies all potential adversities is zero. In economics especially, real life and mathematical modeling are decoupled. Benoit Mandelbrot calculated that IF the Dow Jones Average followed a normal distribution, it should have moved more than 3.4% in a day on 58 days between 1916 and 2003; in fact it did so 1,001 times. It should have moved more than 4.5% on six days; in reality it did 366 times. It should have moved more than 7% once in 300,000 years; in the 20th century alone it did 48 times. At the height of the crisis the market produced what modelers described as 25-standard-deviation moves several days in a row. The market sometimes lives in the extreme tail of its supposed normal distribution curve. A perfect example of the tail wagging the dog!
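The arithmetic behind such claims is easy to reproduce. A minimal sketch, assuming for illustration a 1% daily standard deviation (so that a 7% move is a seven-sigma event) and 252 trading days per year; Mandelbrot's own calibration may differ:

```python
import math

def tail_probability(k):
    # Probability of a daily move beyond k standard deviations
    # under a normal distribution: P(|Z| > k) = erfc(k / sqrt(2)).
    return math.erfc(k / math.sqrt(2))

# Under the assumed 1% daily volatility, a 7% move is a 7-sigma event.
p = tail_probability(7)
trading_days = 252
years_between = 1 / (p * trading_days)
print(f"P(7-sigma day) = {p:.2e}, expected once every {years_between:,.0f} years")
```

Under these assumptions normal theory predicts such a day less than once per million years, yet the 20th century alone produced dozens; the gap between prediction and record is the fat-tail point.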

So are mathematical models cursed to eventually fail? The simple answer is no. They work well in the design details of components and assemblies, and when the environment is taken into account. However, when large-scale "real world" scenarios are attempted without the relevant information, the asymmetries left unaccounted for in the model cannot be validated until the "real world" tests the product in its own environment.

Saturday, February 19, 2011

Human Diversity (in the Genes)

Staring at my inbox, I found an enormous amount of spam. Most if not all of it was nonsense. The occasional message piqued my curiosity, but not enough to open it, for fear of worms and viruses. It got me thinking about the genetic system within us that drives us from birth to death, similar in scope and behavior as far as information gathering and dissemination are concerned.

The inner universe that controls the function of the visible one is a basket of deceit, overthrow, control, incarceration, submission, and acceptance. The Human Genome Project confirmed that human DNA, in its Watson-Crick double-helical mode, breathes life through 23 pairs of chromosomes (22 autosomal pairs plus the sex pair). The packaging of DNA into chromatin, and its further condensation into chromosomes, serves the purpose of division: in a dividing non-germinal cell, the two ends of the "daughter cells" pull equal numbers of chromosomes to each side via tubules. Once separated and enclosed in the newly divided cells, the DNA resumes its original shape.

"The progeny of two (such) sister cells are not alike with respect to the types of gene alteration that will occur. Differential mitoses also produce the alterations that allow particular genes to be reactive. Other genes, although present, may remain inactive. This inactivity or suppression is considered to occur because the genes are 'covered' by other non-genic chromatin materials. Gene activity may be possible only when a physical change in this covering material allows the reactive components of the gene to be 'exposed' and thus capable of functioning." -- Barbara McClintock

The amazing thing is that it is the DNA and its programming that forms, modulates, modifies, and finally renders senescent the vessel (the body) it is carried in, through a set of 25,000-30,000 genes (based on the Sanger Institute's human genome information in the VEGA database, these remain estimates). The genome is not a stationary entity; rather, it is subject to alteration and rearrangement. Even more fascinating is that the DNA at its most basic level is composed of four, count them, four nucleotide bases: (A) Adenine, (T) Thymine, (C) Cytosine, and (G) Guanine. These four are locked in predetermined pairs in a permanent state of dance: A always mates with T, and G always with C. From this simplicity arises the most complex of life forms that exist on our planet.
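The pairing rule is strict enough that, given one strand, the partner strand is fully determined. A tiny sketch (the sequence is made up for illustration):

```python
# Watson-Crick base pairing: A bonds with T, G bonds with C.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def partner_strand(strand):
    """Return the base-paired partner of a DNA strand."""
    return "".join(PAIR[base] for base in strand)

print(partner_strand("GATTACA"))  # CTAATGT
```

Four symbols and one lookup table: the entire redundancy that lets a cell copy and proofread its genome rests on this rule.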

Manifest within the genes is the code that makes us and every other living, breathing being around us. These genes code for the color of the flowers, the blade of grass, the fur on the poodle that sits at your feet, its eyes that look at you longingly, and the wet nose that sniffs for discovery. The control, color, and grandeur of the world's inhabitants are predicated on their genetic expression.

So the question can be asked: what lets so small a number of genes create such diversity? What makes the subtle or dramatic changes that let us recognize the differences among us just by looking at each other's faces? Therein lies the beauty and majesty of the genetic workshop.

At its most basic Mendelian level (Gregor Mendel, July 20, 1822 – January 6, 1884), the genes from the two parents are paired together, each carrying its own set of programmable codes. When the merger takes place, the dominant genes overshadow the recessive ones and express their traits in the offspring's character. That, at least, was the widespread thought in the past. The present has unfolded the napkin of genetic nuance, and there is a whole lot of mischief going on within the folds.

In 1950 a geneticist named Barbara McClintock (awarded the Nobel Prize in 1983) reported that genetic transpositions were occurring throughout the genomic material of maize (McClintock, 1950). She termed these transgressions "jumping genes"; the technical name is "transposons." It turns out that genetic material does prefer a sense of variety. As the saying goes, "Variety is the spice of life," and genes take that to heart. Within the genomic system is a silent but busy mechanism in full perennial blossom.

Transposons are genetic elements that move, or transpose, from one part of the DNA to another. They comprise at least two main groups, the DNA transposons and the retrotransposons (RNA transposons); the commonly listed classes are:
  1. DNA Transposons
  2. LTRs or Long Terminal Repeat elements (retrotransposons)
  3. LINEs or Long Interspersed Nuclear Elements (retrotransposons)
  4. SINEs or Short Interspersed Nuclear Elements (retrotransposons)

The DNA transposons encode the protein transposase, which they require for excision and insertion: the DNA fragment is clipped from one place in the linear DNA of the genome and inserted into another. It is "cut-and-paste" biological technology at work. The question is why. Why does the DNA partake in this promiscuity? The answer, for this and all the other transpositional mechanisms, is "diversity." The "jump" of a gene from one part to another favors diversity, and as Charles Darwin maintained, evolution favors "survival of the fittest." There is an evolutionary programming within the DNA to maintain itself through adversity. It therefore continues the evolutionary experiment to make itself robust against extinction, in line with the "Disposable Soma Theory" (the soma, or body, has to be maintained for propagation of the genome, and once the genome is replicated, the soma can be disposed of).
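The "cut-and-paste" operation can be mimicked on a string standing in for a genome. This is only a cartoon of what transposase does (real insertion also duplicates a short target site, which is ignored here, and the letters are arbitrary placeholders):

```python
def cut_and_paste(genome, start, end, insert_at):
    """Excise genome[start:end] and reinsert it at insert_at
    (an index into the genome left behind after excision)."""
    segment = genome[start:end]
    remainder = genome[:start] + genome[end:]
    return remainder[:insert_at] + segment + remainder[insert_at:]

# The transposon "CD" jumps from position 2 toward the far end;
# the genome keeps the same length, only the order changes.
print(cut_and_paste("ABCDEFGH", 2, 4, 5))  # ABEFGCDH
```

Note that the total genome length is conserved; that conservation is exactly what distinguishes this class from the retrotransposons discussed later.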

So what happens with each transposed genetic element? Interestingly, when an element transposes itself to another site, its new location matters: especially if it lands near a functional gene, it can cause that gene to over-function, under-function, or simply stop functioning.

The fact that transposable elements do not always excise perfectly, and can take genomic sequences along for the ride, has also resulted in a phenomenon scientists call exon shuffling. Exon shuffling juxtaposes two previously unrelated exons, thereby potentially creating novel gene products (Moran et al., 1999).

Let's look at a few examples of inter-genetic mischief:

                   1. Deletion, 2. Duplication and 3. Translocation (Above)

  1. The t(9;22) translocation (the BCR or Breakpoint Cluster Region gene fuses with the Abelson gene) leads to Chronic Myeloid Leukemia.
  2. The t(11;14) translocation leads to Malignant Lymphoma.
  3. The t(15;17) translocation leads to Acute Promyelocytic Leukemia.
  4. The 9p21 polymorphism is associated with heart disease.
  5. Insertion of L1 into the Factor VIII gene caused hemophilia (Kazazian et al., 1988).
  6. Researchers found L1 in the APC genes of colon cancer cells but not in the APC genes of healthy cells in the same individuals. This confirms that L1 transposes in somatic cells in mammals, and that this element might play a causal role in disease development (Miki et al., 1992).
              1. Gene Translocation and 2. Gene Transfer (Right)

Transposons have been labeled "spam" by some, and rightly so. Most of the time spam is relegated to spam archives and does not interfere with the workings of the computer, but sometimes it carries a "virus" or "worm" in its hidden code and crashes the system, and infrequently it may even provide valid information. Transposons often work as promoters, and when this promoter effect is juxtaposed to a functional gene that codes for proliferation of a particular cell type, malignancy ensues. Fortunately, just as antiviral software exists for the computer, an analogue exists in the human body: if a transposon causes an adverse event, it can be deselected or rendered inactive. The question is how.

The genetic "antiviral software" is called the siRNA (small interfering RNA) or RNAi (RNA interference). These RNA fragments are snippets of the messenger RNA that derives its information from the DNA. These snippets can lay themselves on  
the Transposons and neutralize them by a methylation process or simply by suppressing the gene function. It is the body's own defense mechanism to fight internal revolt. Even though the Transposon material are perfectly intact and capable of moving, yet they are kept inactive by epigenetic defense mechanisms such as DNA methylation, chromatin remodeling, and miRNAs.

                                                                    RNA and DNA

Consider a quiescent gene used specifically for the proliferation of cells during in-utero organ (liver, lung, kidney, etc.) growth, for instance a gene expressed in pluripotent cells that liberate CEA (Carcinoembryonic Antigen) or AFP (Alpha-Fetoprotein) between 12 and 28 weeks of gestation. Mutation of such a gene later in life can lead to unrestricted cell growth and cancer formation, with liberation of these proteins: CEA (found in gastrointestinal malignancies) and AFP (hepatocellular carcinoma, or liver cancer). These serum proteins then become guides for the oncologist in monitoring disease and response to therapy. The switch that gets turned on later in life was not supposed to flip, but it does when the specific gene goes awry. The mutation may arise from many causes, from environmental and genetic stressors to these mischievous transposons.

The RNA transposons, also known as retrotransposons, work differently. Retrotransposons do not encode transposase; rather, they produce RNA transcripts and then rely upon reverse transcriptase enzymes to transcribe those RNA sequences back into DNA, which is then inserted at the target site. (They carry, or have co-opted, codes for proteins that cleave specific portions of the target and entice the reverse transcriptase to work there. So in essence the messenger, the RNA, becomes the creator of new message, the DNA.)
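In contrast to the cut-and-paste cartoon of the DNA transposons, a retrotransposon leaves the original element in place and inserts a copy, so the genome grows with each event. Again a toy sketch on placeholder letters:

```python
def copy_and_paste(genome, start, end, insert_at):
    """Retrotransposition cartoon: the element at genome[start:end]
    is transcribed, reverse-transcribed, and a DNA copy is inserted
    at insert_at; the original stays put, so the genome gets longer."""
    copy = genome[start:end]
    return genome[:insert_at] + copy + genome[insert_at:]

# "CD" is copied to the end; the genome grows from 8 to 10 letters.
print(copy_and_paste("ABCDEFGH", 2, 4, 8))  # ABCDEFGHCD
```

This replicative mode is why retrotransposon-derived sequence accumulates over evolutionary time and accounts for such a large fraction of the genome, as the figures below show.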

The two main types are the LINEs and SINEs.
SINEs constitute roughly 10% of the human genome and LINEs about 17%; of the many LINE copies, only about 100 remain active in the human genome. LTR (Long Terminal Repeat) elements make up roughly another 8%, and the "cut-and-paste" DNA transposons only 2-4%. In all, nearly half of the entire human genome is composed of these transposons.

Not surprisingly, this DNA/RNA transposon behavior has been going on for millions of years. Copies of similar transposons, such as Alu (also present in humans), have been found in various primates, and large segments of common transposons are shared with other mammals, including whales, which helps us better understand the mammalian branch of the Tree of Life.

Large segments of human DNA contain genetic material from viruses that inserted it long ago. Most of this lies dormant as "junk DNA," but some of it has gained respect as "proto-oncogenes" that can, via various mechanisms, convert to "oncogenes" and start the cancer process. These genes include the c-myc gene, the src gene, and others. Other primates and animals also share these viral genetic insertions, indicating that the phenomenon is not species-specific.

Another interesting fact is our body's ability to generate a large diversity of immunological antibodies against various viral, bacterial, and fungal agents. This diversity is believed to arise from the RAG proteins, whose code smuggled its way into our DNA millions of years ago. It survives and is functionally propagated because of its utility and benefit to its host! The robust human immune surveillance, with its enormous capacity to generate disease-specific antibodies, is a magnificent example of the benefit of evolutionary "jumps."
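The combinatorial payoff of RAG-mediated gene-segment shuffling can be sensed with rough arithmetic. The segment counts below are approximate, illustrative figures (published counts vary by source), and junctional imprecision multiplies the real total by orders of magnitude more:

```python
# Approximate functional gene-segment counts (illustrative figures).
heavy = 40 * 25 * 6      # V x D x J segments, antibody heavy chain
kappa = 40 * 5           # V x J segments, kappa light chain
lam = 30 * 4             # V x J segments, lambda light chain

# Any heavy chain can pair with any light chain.
combinations = heavy * (kappa + lam)
print(f"{combinations:,} antibodies from segment shuffling alone")  # 1,920,000
```

Nearly two million distinct antibodies from a few dozen inherited segments: the same jumping-gene machinery that causes mischief elsewhere here underwrites our survival.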


1. McClintock, B. The origin and behavior of mutable loci in maize. Proc Natl Acad Sci USA 36, 344–355 (1950)

2. Chung, W.-J., Okamura, K., Martin, R., & Lai, E. C. Endogenous RNA interference provides a somatic defense against Drosophila transposons. Current Biology 18, 795–802 (2008)

3. Yang, N., & Kazazian, H. H. L1 retrotransposition is suppressed by endogenously encoded small interfering RNAs in human cultured cells. Nature Structural and Molecular Biology 13, 763–771 (2006)

4. Kazazian, H. H. Mobile elements: Drivers of genome evolution. Science 303, 1626–1632 (2004)

5. Kazazian, H. H., et al. Haemophilia A resulting from de novo insertion of L1 sequences represents a novel mechanism for mutation in man. Nature 332, 164–166 (1988)

6. Kazazian, H. H., & Moran, J. V. The impact of L1 retrotransposons on the human genome. Nature Genetics 19, 19–24 (1998)

7. Moran, J. V., et al. Exon shuffling by L1 retrotransposition. Science 283, 1530–1534 (1999)

8. Miki, Y., et al. Disruption of the APC gene by a retrotransposal insertion of L1 sequence in colon cancer. Cancer Research 52, 643–645 (1992)