Dr. Joseph Lorenzo


  1. This installment of my blog is its final chapter. For almost six years I have had the privilege of being able to write about the world of bone and mineral research in these pages. I hope that you, the readers, have been happy with the results. It has been a great honor for me to write these blogs. I very much thank the ASBMR, its officers and its administrators for being so generous in allowing me essentially free rein to choose and expound on topics that I feel are important to ASBMR members. Since 2010, I have written about 75 blogs. When I first started, I tried to be somewhat chatty, probably because I was not quite sure what I was doing. However, as time progressed, I began to focus the blog on issues in our field that were important or had wide-ranging implications for scientific discovery. A number of blogs centered on scientific technology. When I began my scientific career (sometimes, it seems like when dinosaurs walked the earth), bone and mineral research was dominated by in vitro studies. However, we are in the middle of a revolution in the development of animal models for scientific research. Tools such as global knockout mice have progressed to targeted gene manipulation models, which delete or alter specific DNA sequences either from birth or after activation of recombinases post-natally in a limited number of tissues. These allow researchers to analyze the role that the expression or mutation of individual genes in specific cells has on bone development and bone function. Most recently, CRISPR/Cas9 technology (which is very likely a Nobel Prize winning discovery) has made the production of these models more efficient and less expensive. In human bone and mineral research, genetic variation as a cause of human disease has emerged as the great frontier of scientific research. We have known for a long time through ethnic and family studies that skeletal diseases have major elements of their pathogenesis in variations across multiple genes.
We are now on the threshold of being able to identify those variabilities and, in turn, better define the origins of human skeletal disease. I wrote a number of blogs about this topic. High-speed genetic sequencing has decreased in price to the point where it is almost certain that the specific genetic variabilities that underlie human skeletal diseases will be identified in the near future. This will provide insights which, when explored in animal models, should identify new ways to treat human skeletal diseases. I also explored controversies in our field, including how much calcium and vitamin D are necessary for bone health, whether we are adequately measuring vitamin D levels in all ethnic groups, what are the dangers of administering vitamin D and calcium supplements and what are the adverse events associated with our therapies for bone disease. There were a number of blogs about funding, which is the mother’s milk of scientific research. Most recently, my last blog provided hope that, at least in the US, funding at the Federal Government level is increasing. Perhaps the most difficult blogs to write were those that memorialized members who had a tremendous influence on our field and had passed away. These were people whom I knew personally and admired. I hope that I was able to do justice to their memory. The end of this blog will give me more time to work on my own research, which, fortunately, is going well. I look forward to continuing to interact with the bone and mineral research community in a more conventional role as a scientist who is simply trying to understand the origins of skeletal disease and identify ways to treat it. Joe Lorenzo, Farmington, CT, USA
  2. Funding is the bedrock of all biomedical research. In the United States there are a number of agencies that support scientific biomedical research. However, far and away the largest single source is the National Institutes of Health (NIH). Since 2003 funding for the extramural program of the NIH has declined in inflation-adjusted dollars by 22% (see Figures 1 and 2, which were created by the Federation of American Societies for Experimental Biology, FASEB). In turn, this decrease in inflation-adjusted support has decreased the number of research project grants that NIH can fund in any given year by over 3000 (roughly 10%) during this same period. Fortunately, it now appears that the trend of decreasing fiscal support by the U.S. Federal Government for biomedical research may be reversing. For fiscal year 2016 President Obama’s budget proposed a one billion dollar increase in NIH funding, which would be only the third time since 2003 that the annual NIH budget increased at a rate greater than inflation. Even more hopeful for researchers, changes to the Budget Control Act of 2011 (the so-called “Sequestration”) allowed more discretionary funding in the Federal budget. Recently, a group of 100 members of the U.S. House of Representatives signed a letter urging the House to increase NIH funding above the President’s request for fiscal year 2016 by another 700 million dollars to 32 billion dollars per year (roughly a 6% increase over last year). This is also the level of NIH support that the U.S. Senate Appropriations Committee approved last August. The U.S. Congress has also become aware that the wide swings in fiscal funding for the NIH have had significant negative effects on the number of investigators who can pursue biomedical research. This is probably most important for early stage investigators, who are the most dependent on the successful funding of their research to advance their careers. In July the U.S.
House approved the “21st Century Cures Act” with strong support from both major political parties. This bill would provide $8.75 billion in new funding for the NIH over five years ($1.75 billion in additional funding per year). However, this measure needs to pass the Senate, where it may run into opposition because of limited ways to pay for the new NIH funding. The additional NIH funding would be distributed in the following areas: biomedical research, cures development, an accelerating advancement program, high-risk high-payoff research, and special funding support for early career researchers. The U.S. House bill also sought to accelerate the process by which the U.S. Food and Drug Administration (FDA) approves new treatments for disease. This would include relying on results from biomarker studies rather than more definitive trial results. It is unknown whether changes to the FDA approval process will be included in any final legislation. All this news brings some hope to U.S. biomedical researchers that the decline in inflation-adjusted funding of the NIH that has occurred over the last 12 years may be reversing. However, FASEB estimates that even with an increase in NIH funding of 5 percent per year, which is approximately what the 21st Century Cures Act proposes, it would take 10 years to restore NIH funding to its fiscal year 2003 inflation-adjusted level. Joe Lorenzo Farmington, CT, USA.
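FASEB's ten-year estimate can be sanity-checked with a little compound arithmetic. The sketch below is mine, not FASEB's actual model: the 22% inflation-adjusted decline and the 5% nominal growth rate come from the figures above, while the 2.2% inflation rate is an assumption chosen purely for illustration.

```python
# Illustrative sketch (not FASEB's model): how long a nominal funding
# increase takes to undo a 22% inflation-adjusted decline. The 2.2%
# inflation assumption is mine, for illustration only.

def years_to_restore(real_decline, nominal_growth, inflation):
    """Count the years of compounding needed for real (inflation-
    adjusted) funding to climb back to its starting level."""
    level = 1.0 - real_decline  # e.g. funding sits at 78% of 2003
    real_growth = (1 + nominal_growth) / (1 + inflation) - 1
    years = 0
    while level < 1.0:
        level *= 1 + real_growth
        years += 1
    return years

print(years_to_restore(0.22, 0.05, 0.022))  # → 10
```

Under these assumed numbers the answer lands on the same decade-long horizon FASEB projects; a higher inflation assumption stretches it further still.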
  3. Osteoporosis is a disease with a strong genetic etiology (1). However, exactly which or how many genes are involved in the pathogenesis of this condition is unknown. Numerous studies have tried to link specific regions of the genome to traits such as bone mineral density to understand the genetic variations that predispose individuals to osteoporosis. In the past these have either used a genome wide association study (GWAS) approach or a candidate gene approach. GWAS uses single nucleotide polymorphisms (SNPs) or variations in DNA, which occur naturally throughout the genome, to identify regions of chromosomes that predispose individuals to have a specific trait such as high or low bone mineral density. The candidate gene approach examines specific variations in the DNA sequence of known genes (including the regulatory, coding and non-coding regions) to identify variations that occur more frequently with a specific trait like bone mineral density. Both approaches have their strengths and weaknesses. GWAS can identify regions in the entire genome that “link” to specific traits but not specific sequences in DNA that are causative for the development of disease. Candidate gene studies can identify specific sequences in a gene that are involved in the development of a trait but are limited to analysis of the sequences of known genes. In contrast, whole genome sequencing uses the genetic information of the entire genome (approximately 3 billion base pairs from the DNA of the chromosomes and the mitochondria) to link specific sequence variations to human traits. The technology has evolved since 2000 when the Human Genome Project first published its rough draft at a cost of over 3 billion dollars. Currently, the cost of sequencing a human genome is approaching $1000 for a high-fidelity run and it is likely that this price will drop further as the technology evolves.
Reports that use whole genome sequencing to link human genetic sequence variations to bone density have now been published. The first was in 2013 and identified a mutation in the leucine-rich repeat-containing G-protein coupled receptor 4 (LGR4) gene that fully disrupts its function (2). This variation was identified in an Icelandic population and appears specific for this group since it was not detected in other European populations. More recently, a large international study (3) used the UK10K project to examine whole genome sequencing of 2,882 subjects as well as whole exome (the protein coding regions of genes) sequencing in 3,549 subjects and identified a non-coding region genetic variant in the vicinity of the engrailed homeobox-1 (EN1) gene, which associated with bone mass. This variant was relatively rare, with a minor allele frequency of 1.7% and is not represented in the current HapMap imputation panels, showing that there is still much to be learned about genetic variation in humans. Additional, albeit slightly less significant, variants around the EN1 gene were also found and these too are very rare (occurring in 1.6% to 5.8% of the population). Three of the four genetic variants associated with lumbar bone density while the fourth associated with femoral bone density. All these results demonstrate that bone mass is a complicated genetic phenomenon. Unlike the previous Icelandic study, EN1 variants were present and segregated similarly with bone mass in the general population of subjects of European descent. Demonstrating the power of this technology, this observed effect was highly significant and fourfold larger than that previously seen at any chromosomal site in a large GWAS analysis. In addition, one of the DNA sequence variants was associated with increased bone mass and a decreased fracture risk in a large cohort study. The authors also characterized a mouse model, which had a targeted deletion of En1. 
This animal had decreased bone mass with increased osteoclastic and osteoblastic activity. Additional studies found that En1 was expressed in osteoblast lineage cells but not in osteoclasts. Based on the results of all these studies, the authors concluded that En1 is an important mediator of skeletal biology. These reports demonstrate the potential power of whole genome sequencing to identify relatively rare variants in the genome that associate with skeletal disease. However, they also demonstrate the complexity of such studies, which require a large number of subjects, sophisticated statistical genomics and advanced genome sequencing techniques. Because of these limitations, identifying the multi-gene variations that may underlie more common causes of osteoporosis is not yet within reach. However, one only has to examine the rapidity with which the technology of whole genome sequencing has advanced in the last 15 years to appreciate that these challenges are apt to be overcome. We are most likely on the verge of important new insights into the genetic etiology of this disease. Joe Lorenzo Farmington, CT, USA
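For readers unfamiliar with how figures like the 1.7% minor allele frequency quoted above are derived, here is a minimal sketch. The genotype counts below are invented for illustration; only the cohort size (2,882 subjects, from the UK10K whole-genome arm described above) comes from the post.

```python
# Hypothetical genotype counts, only to illustrate how a minor allele
# frequency (MAF) such as the 1.7% EN1 figure is computed. The
# calculation itself is the standard allele count.

def minor_allele_frequency(hom_ref, het, hom_alt):
    """Each subject carries two alleles, so a cohort of N subjects
    carries 2N alleles; heterozygotes contribute one alternate allele
    and alternate homozygotes contribute two."""
    total_alleles = 2 * (hom_ref + het + hom_alt)
    alt_alleles = het + 2 * hom_alt
    freq = alt_alleles / total_alleles
    return min(freq, 1 - freq)  # the *minor* allele is the rarer one

# 2,882 subjects with invented counts yielding a rare variant:
print(round(minor_allele_frequency(2786, 94, 2), 3))  # → 0.017
```

A variant this rare is easy to miss with chip-based GWAS imputation, which is why the post notes it was absent from the HapMap panels.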
  4. Diabetes is a common condition with significant morbidities (1). The World Health Organization estimates that the number of diabetics increased from 85 million in 1985 to 217 million in 2005 and may reach 366 million by 2030. It is a major cause of cardiovascular, renal, neurologic and ophthalmologic diseases. In the United States the cost of diabetic care is projected to reach almost 200 billion dollars by 2020. Type 2 diabetes, which in the past was called non-insulin dependent or adult onset, is the most prevalent form, comprising at least 90% of the cases. Because the incidence of both type 2 diabetes and osteoporosis increases with age, it is not unusual for both conditions to occur in the same patient. A variety of medications have been approved to treat type 2 diabetes and these are often taken for prolonged periods. However, it is now recognized that some of these have detrimental effects on bone health. Meier et al recently published an excellent review of this topic (2). They point out that diabetes itself is a risk factor for osteoporosis despite the higher bone mineral density that is often seen in diabetic patients. The reasons for this are multifactorial and only partially understood. Among the drugs used to treat type 2 diabetes, the thiazolidinediones (TZD) have most clearly been shown to decrease bone mass and increase fracture incidence (3). The mechanism for the effects of TZD-induced bone loss centers on the action of these agents on the peroxisome proliferator-activated receptor gamma (PPARg). Activation of PPARg by TZD inhibits osteoblast differentiation while enhancing adipocyte and osteoclast differentiation. Interestingly, the effects of TZD on bone are similar to the effects of aging. Sodium-glucose co-transporter 2 (SGLT2) inhibitors are the most recent class of anti-diabetic drug to be approved. SGLT2 regulates renal glucose reabsorption in the kidney.
The SGLT2 inhibitors block this action, increasing glucose excretion in the urine to improve glycemic control. However, the use in patients of two approved members of this medication class, dapagliflozin and canagliflozin, has been associated with an increased incidence of fractures (2). Because it received new data about fracture risk, the US Food and Drug Administration (FDA) recently updated the prescribing information for canagliflozin to include a strengthened warning (4). The FDA stated in its safety announcement about canagliflozin that it had “revised the drug label and added a new Warning and Precaution. The additional data confirm the finding that fractures occur more frequently with canagliflozin than placebo, which is an inactive treatment. Fractures can occur as early as 12 weeks after starting the drug. In the clinical trials, when trauma occurred prior to a fracture, it was usually minor, such as falling from no more than standing height.” The FDA also added information about a two-year clinical trial that it required the manufacturer of canagliflozin to perform. This demonstrated a greater loss of bone mineral density at the hip and lower spine with canagliflozin than with placebo. The FDA is continuing to evaluate the risk of fractures with other members of the SGLT2 inhibitor class to determine if additional label changes or studies are needed. These new findings are of concern and demonstrate that patients at increased risk for osteoporosis who are prescribed these medications need to be carefully monitored or considered for treatment with other anti-diabetic agents. Joe Lorenzo, Farmington, CT, USA
  5. While members of the ASBMR generally study bones to learn about the function of their cells or the ways in which their structure and mechanical properties are altered in health and disease, paleontologists and archeologists study fossilized bones to glean information about ancient animals and humans. Until thirty-five years ago, knowledge about ancient species was limited to what could be obtained from the morphology of their bones or the location where the specimens were found. However, advancements in molecular biology and, especially, in DNA sequencing have revolutionized the amount of information that ancient bones can yield. This is highlighted in a series of recent articles in the journal Science (1). The field started in the early 1980s with PCR cloning of DNA from samples but has progressed in the modern era to high throughput Next-Gen sequencing. Today, vast amounts of information can be obtained about the similarities and differences between present day and ancient plants, animals and humans. To generate this information, scientists have perfected techniques to isolate DNA from ancient specimens and identify the degree to which the ancient DNA was contaminated by DNA from modern species. This has been accomplished with spectacular success within the limitations of the stability of DNA in fossils. So far, the oldest DNA that has been reliably sequenced is from a 700,000-year old horse. DNA from older specimens is too degraded to be of any use, which precludes studies of dinosaur fossils, since these existed from 250 to 66 million years ago. Perhaps the most important information obtained from studies of the DNA in ancient bones is the degree to which modern humans differ from extinct human species. It has now been established that modern Europeans and Asians have inherited 1% to 3% of their DNA from Neandertals and up to 5% from the related Denisovan species of ancient humans (2).
This means that interbreeding among these somewhat divergent groups occurred in the past. A recent study examining the length of the Neandertal sequences in a jaw bone specimen from an ancient human who died 37,000 to 42,000 years ago identified several segments of Neandertal DNA sequence (some more than 50 million base pairs in length) in the specimen that otherwise was predominantly (89 to 95%) modern human DNA. This finding implies that the man from whose jawbone the DNA was extracted had an ancestor who mated with a Neandertal within four to six generations of his birth (less than 200 years). Next-Gen sequencing is also allowing more precise mapping of the migration of the ancestors of native North and South Americans from Asia and the relationships between Native Americans and the native peoples of Australia and Melanesia (a group of islands northeast of Australia) (3). Most fascinating may be studies of the function of Neandertal DNA sequences that are preserved in the modern human genome. It has been suggested that these are preserved because they impart an evolutionary advantage that allowed modern humans to make a significant leap in their adaptation to hostile environments as they migrated from central Africa throughout the world (4). The technology of Next-Gen DNA sequencing is rapidly evolving. As it becomes less expensive and more available, it should allow us to understand even more secrets of our evolution and the nature of our ancient ancestors. Joe Lorenzo Farmington, CT, USA
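The "four to six generations" inference rests on the fact that recombination whittles down inherited segments every generation, so long surviving Neandertal tracts imply recent admixture. A very rough back-of-the-envelope version of that logic follows; the study itself used a full statistical model, and both the average tract length and the centimorgan-per-megabase conversion used here are illustrative assumptions of mine, not figures from the paper.

```python
# Rough sketch of the tract-length reasoning: after g generations, an
# introgressed segment is expected to be roughly 100/g centimorgans
# long. The ~1 cM per megabase conversion is a coarse genome-wide
# average, used here only for illustration.

def generations_from_tract(length_mb, cm_per_mb=1.0):
    """Approximate generations since admixture implied by an *average*
    surviving tract of the given physical length."""
    length_cm = length_mb * cm_per_mb
    return 100 / length_cm

# An assumed ~20 Mb average tract would suggest about 5 generations:
print(generations_from_tract(20))  # → 5.0
```

The key qualitative point survives the crudeness of the sketch: megabase-scale tracts of Neandertal DNA can only persist if the admixture event was only a handful of generations back.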
  6. Aging is a complex process, which was once considered inevitable and irreversible. However, studies in mice, using heterochronic parabiosis, argue that some effects of aging may, in fact, be reversible. In this model the vascular system of a young mouse is connected to the vascular system of an aged mouse. These studies have been particularly informative about the role that aging has in the response to injury. In 2005 Conboy et al. (1) found that delays in the regenerative potential of muscle stem (satellite) cells and liver hepatocytes in aged mice could be reversed when these animals were parabiosed with young mice. These studies established that some effects of aging on progenitor and mature cells are reversible. They also demonstrated that the factors that produced these effects were transferrable through the circulation. Using lineage tracing it was shown that the tissue regeneration phenotype of aged tissue after heterochronic parabiosis was due to the responses of resident cells and not to the responses of cells that transferred into the aged mice from the younger animal. In subsequent work some researchers identified growth differentiation factor 11 (GDF11), a member of the BMP/TGFβ superfamily, as a protein in serum that restored the young features of cardiac (2) and skeletal muscle (3) in aged animals. However, others have disputed these results (4). Hence, the role of GDF11 in the response of aged cells to heterochronic parabiosis is controversial. Now Baht et al. (5) have used heterochronic parabiosis to examine whether the differences in the repair response to fracture between young and old mice are reversible. They were interested in this question because the rate of fracture repair in old mice is significantly slower than in young mice. These authors found that heterochronic parabiosis reversed the fracture repair phenotype of aged mice and enhanced their osteoblastic differentiation capacity.
Furthermore, they demonstrated that the capacity to revert the fracture repair process and the osteogenic potential of aged animals to that of young mice occurred when bone marrow cells from young animals were adoptively transferred into old mice. Hence, bone marrow cells appear to produce factors that reverse the aging response during fracture repair and enhance the rate at which bone is formed. It did not appear that aged mice produce factors that diminish the fracture repair process to any great degree, since transfer of older bone marrow cells into young mice had only a small inhibitory effect on the fracture repair process. Using a targeted osteoblast ablation strategy, the authors found that destruction of osteoblasts in the host prevented the repair process, while ablation of osteoblasts in the graft had little effect on the rate of fracture repair. This experiment demonstrated that the interaction of young bone marrow with aged mesenchymal cells is critical for the reversion of the fracture response from that seen in aged mice to that seen in young animals. Exactly what the factor or factors are that young animals produce to enhance the fracture repair process remains unknown. However, the fact that these experiments demonstrate their existence provides hope that they will eventually be identified. It is therefore possible to envision a future in which diseases of aging such as fracture nonunion or the poor osseous integration of orthopaedic or dental implants into bone may be treated with agents that mimic the effects of youthful serum on the fracture repair process. Joe Lorenzo, Farmington, CT, USA
  7. The reproducibility of published data remains a critical determinant of the quality of any research. Unfortunately, some scientific articles are published and subsequently their results are found to be difficult or impossible to reproduce. When this occurs, it affects all scientists because it erodes the public’s trust in science and scientists. The majority of recently disputed articles have concerned preclinical or basic science topics. In response to this issue, Francis Collins, the Director, and Lawrence Tabak, the Principal Deputy Director, of the US National Institutes of Health wrote an article in Nature (1) in which they proposed a number of steps to mitigate this problem. Firstly, they stated that irreproducibility is rarely caused by scientific misconduct. Rather they said that: “a complex array of other factors seems to have contributed to the lack of reproducibility. Factors include poor training of researchers in experimental design; increased emphasis on making provocative statements rather than presenting technical details; and publications that do not report basic elements of experimental design”. In their excellent article last fall in the JBMR (2) Stavros Manolagas from the University of Arkansas for Medical Sciences and Henry Kronenberg from the Massachusetts General Hospital and Harvard Medical School went over this issue in detail as it relates to the bone field and they suggested several potential solutions to this problem. In response to this controversy, the NIH released its Principles and Guidelines for Reporting Preclinical Research in November of last year (3). These were accepted by a number of journals including Nature, Science and Cell as requirements before articles could be published in those journals. The Guidelines contained a large number of recommendations.
These included: 1) Encouraging the use of standards within each scientific discipline for nomenclature and reporting and 2) Requiring that: a) each study list the number of times experiments were repeated; b) statistics be fully reported; c) a statement be included about whether samples were randomized and how this was accomplished; d) there be clarity about whether experimenters were blinded to the treatments and outcome assessments; e) authors state how the sample size for each group in an experiment was determined and f) the criteria used to exclude data be clear. The Guidelines also stipulate that, at a minimum, all datasets on which the conclusions of the paper were made be available upon request, where ethically appropriate, to the editors and journal reviewers and upon reasonable request immediately after publication. Finally, the Guidelines obligate journals to consider articles for publication that refute their published papers using the same standards for acceptance that were used for the original publication. However, the Principles and Guidelines have not received a universal endorsement. John Haywood, the President of the Federation of American Societies for Experimental Biology (FASEB, of which ASBMR is a member) recently wrote to Lawrence Tabak at NIH about concerns that FASEB had with the Guidelines (4). Haywood agreed that “Guidelines to encourage uniform reporting of data and experimental methods are valuable to the scientific community and the public”. However, FASEB’s concerns were 1) that the NIH Guidelines were “a “one size fits all” list” that “could become burdensome, overwhelming, and ultimately ineffective if it requires everyone to report every factor regardless of its relevance to a particular kind of research”; 2) that the NIH Guidelines are too rigid. Haywood states “Biomedical research is a vast enterprise with substantial variety in statistical methods, data types, and best practices within and between disciplines.
To achieve the stated goals of enhancing rigor, reproducibility, robustness, and transparency, guidelines must allow flexibility and discretion by journal editors and reviewers”. Finally, FASEB was concerned that the guidelines represent an “increased administrative burden for researchers and reviewers” that may weaken scientific peer review. So where do we go from here? Clearly, there is a need to diminish the likelihood that irreproducible data will be published. This means that some version of the Guidelines is inevitable. Scientists need to engage in a dialogue with funding agencies and journals to produce a set of standards for our profession that are fair, effective and the least burdensome possible. Achieving such standards should be a two-way street that ultimately improves published science and the public’s faith in scientists. Joe Lorenzo, Farmington, CT, USA
  8. In a previous blog, I highlighted the ease with which CRISPR/Cas9 technology could be used to edit DNA sequences in almost any cell (1). At my institution the gene targeting and transgenic mouse facility is exclusively using CRISPR/Cas9 to engineer its mouse models and I suspect the same is true at any number of centers around the world. The advantage of this technology is that genetically manipulated mice can be created in half the time and at almost half the cost of the previous technology, which employed homologous recombination in embryonic stem cells. CRISPR/Cas9 also eliminates the need to breed chimeric mice for multiple generations into inbred strains before experiments can be performed on them. However, in my previous blog I also mentioned the “eugenic potential of the technology to engineer almost any polymorphism into the embryo of an organism”. My concern was that “The era of “bespoke” genes is now at least within the realm of possibility and it will be a challenge of both science and society in general to apply this technology in an ethical way.” Recently, position papers in both Science and Nature have addressed just this topic. Edward Lanphier and colleagues argue in Nature (2) that “There are grave concerns regarding the ethical and safety implications of this research. There is also fear of the negative impact it could have on important work involving the use of genome-editing techniques in somatic (non-reproductive) cells.” Because of these fears their paper called for a moratorium on experiments that manipulate human embryos and other reproductive cells. These authors were concerned with quality control of the genetic manipulation and the fear that we don’t know enough about the long term implications of the technology to justify its use to create human embryos that would produce live births.
In an accompanying article in Nature (3), David Cyranoski mentions that there are suspicions that some researchers have already created human embryos with edited genomes and that papers describing such work have been submitted for publication. Lanphier et al. urge that a dialogue be started, “both to establish how to proceed in the immediate term, and to assess whether, and under what circumstances — if any — future research involving genetic modification of human germ cells should take place. Such discussions must include the public as well as experts and academics.” In the Science article David Baltimore and others report on a meeting that was held in Napa, California in January 2015 to discuss the ethical and moral implications of the use of CRISPR/Cas9 technology in human genetic engineering (4). Similar to the Lanphier article, these individuals also strongly discouraged “any attempts at germline genome modification for clinical application in humans, while societal, environmental, and ethical implications of such activity are discussed among scientific and governmental organizations.” They emphasized that there need to be “forums in which experts from the scientific and bioethics communities can provide information and education about this new era of human biology”. In addition, they want to “encourage and support transparent research to evaluate the efficacy and specificity of CRISPR-Cas9 genome engineering technology in human and nonhuman model systems”. Clearly, we are at a crossroads. The technology to rapidly manipulate genetic material in cells is readily available, relatively inexpensive and not particularly difficult to implement. However, the general public has only just become aware of its implications. As with many scientific revolutions there is a tremendous potential to benefit mankind as well as grave concerns about its appropriate use.
Consequently, there exists a pressing need to establish ethical guidelines for employing these tools in ways that facilitate their potential to benefit us while, simultaneously, protecting us from their unethical use. Joe Lorenzo, M.D. Farmington, CT, USA
  9. Steven M. Krane, M.D., a founding member of the American Society for Bone and Mineral Research and its 4th President, passed away on January 19, 2015 at age 87 after a long illness. Steve was a giant in our field whose contributions to both skeletal biology and rheumatology are immeasurable. He was originally from New York City, where he received both his undergraduate and medical degrees at Columbia University. However, he began his long affiliation with the Massachusetts General Hospital and Harvard University directly after his graduation from Columbia’s College of Physicians and Surgeons. At MGH Steve was first a medicine resident, then a research fellow in the Thyroid Unit and ultimately, a Chief Resident in Medicine. After his clinical training he began his faculty career in the Endocrine Unit of the MGH. Steve eventually decided that the focus of his career was to be rheumatology and in 1961 he became the Chief of the MGH Arthritis Unit, a position that he maintained for 39 years. He concentrated his science on the mechanisms through which connective tissues degraded. In two papers in Science in 1967 Steve pioneered the concept that connective tissue-degrading enzymes had critical roles in human diseases. He was the first to show that Paget’s disease patients excrete collagen fragments in their urine, demonstrating the breakdown of bone that is so active in that disease. He also identified synovial-produced collagenase as an important factor in the joint destruction that characterizes rheumatoid arthritis. Over the course of his long career Steve would go on to define the roles of a variety of collagenases in human disease and develop a series of innovative models to test these roles. His CV lists 191 publications and he maintained an NIH-funded laboratory into his 80’s. Steve was brilliant. One only had to present data to him to appreciate how quickly his mind worked.
I had the opportunity to present my work to him a number of times and I rapidly came to appreciate (and sometimes fear) his remarkable ability to organize data and identify both the strengths and the weaknesses of any argument or conclusion. He was totally without pretension and would tell you point blank whether he thought what you were trying to sell was valid or worthless. There was no sugarcoating with Steve. However, he would do this because he was trying to be constructive. One only has to scan the illustrious list of investigators that he mentored over his career to appreciate how strong an influence he had on the development of talented scientists in our field. Steve loved life and never lost his appreciation for the wonder of scientific discovery. He would become excited by good science and he encouraged any number of investigators to pursue discovery both in his own lab and in labs around the world. He was also funny, personable and caring. We would occasionally take walks together at ASBMR meetings and he was always interested in what I was doing and the implications of my work. Despite his sometimes-gruff demeanor, he retained an element of child-like wonder. I remember very early in my career being at the Gordon Conference on Bones and Teeth and watching Steve and his good friend, Lou Avioli, cavorting together like kids at summer camp, which, to a degree, they were. Steve once said something to me that I have always appreciated. He told me that I was serious about my science, which for Steve was high praise. He will be greatly missed. Joe Lorenzo, M.D. Farmington, Connecticut, U.S.A.
  10. Most available therapies for osteoporosis function as inhibitors of bone resorption. However, in patients with very low bone mass or those who already have a fragility fracture, anabolic therapies, which enhance bone formation, may be more effective (1). Currently, only analogs of parathyroid hormone are approved as bone anabolic agents to treat osteoporosis. The discovery that the canonical Wnt beta-catenin signaling pathway stimulates bone formation has identified a number of new drug targets, which may be exploited to develop novel anabolic therapies for osteoporosis (2). Wnts are signaling proteins that are produced in the bone microenvironment. They bind specific receptor complexes on osteoblast lineage cells and enhance the development and function of mature osteoblasts. The receptors for canonical Wnts include the protein frizzled and its co-receptors, which are either low-density lipoprotein receptor-related protein (LRP) 5 or 6 (3). There are a large number of Wnt proteins that function as ligands for the canonical Wnt signaling pathway. In addition, there are inhibitors of Wnt signaling that are produced in bone and modulate the effect of Wnt proteins. Among these are sclerostin and dickkopf-related protein 1 (DKK-1). Both bind LRP5/6 and prevent canonical Wnts from interacting with their cognate receptor complex. Much attention has focused on the potential of antibodies to sclerostin to act as a bone anabolic therapy (4). By binding specifically to sclerostin, these antibodies prevent sclerostin from binding to frizzled-LRP5/6 receptors and enhance bone mass. A phase two clinical trial demonstrated that an anti-sclerostin antibody had a significant anabolic effect on bone mass in humans (5). However, the mechanisms by which sclerostin inhibits osteoblast development and function are incompletely understood. In 2011 Leupin et al demonstrated that LRP4 is also involved in the ability of sclerostin to inhibit osteoblasts (6). 
These authors screened proteins that directly bound sclerostin. They identified LRP4 as a sclerostin-binding protein, which facilitated the ability of sclerostin to inhibit canonical Wnt signaling. They also examined two patients with sclerosteosis, which is typically caused by a deficiency in sclerostin production and is characterized by an abnormally high bone mass. Significantly, they discovered that their patients actually had mutations in their LRP4 gene, which prevented the binding of mutant LRP4 to sclerostin (6). In a recent follow-up paper this group examined mice with conditional deletions of LRP4 either in both osteoblasts and osteocytes or just in osteocytes (7). They found that, like patients with LRP4 mutations and sclerosteosis, both LRP4 conditionally deleted mouse models had increased bone mass and rates of bone formation. Most importantly, the authors also developed antibodies that specifically targeted the region of LRP4 that binds sclerostin. Antibody binding occurred without affecting the interaction of LRP4 with agrin and muscle-specific kinase, which is important for neuromuscular junction signaling. When injected into rats, these antibodies increased bone mass and bone formation rates. Hence, they show promise as a novel bone anabolic therapy. These authors also found that both conditional deletion of LRP4 in mice and the prevention of LRP4 binding to sclerostin with anti-LRP4 antibodies in rats markedly increased serum sclerostin levels. Furthermore, neither model was associated with increased levels of sclerostin mRNA in bone. These results suggest that LRP4 acts as a binding protein that maintains sclerostin in the bone microenvironment. In the absence of sclerostin-LRP4 interactions, much more sclerostin is free to circulate in serum. Hence, measuring serum sclerostin may be a poor surrogate for evaluating the local production of sclerostin in bone and the rate of bone formation. 
In situations in which bone LRP4 levels change, serum sclerostin levels may vary without an alteration in sclerostin production in bone. Whether these results translate into a new therapeutic option for patients with osteoporosis remains to be seen. However, they are intriguing and provide new insights into how the canonical Wnt signaling pathway regulates bone mass. Joe Lorenzo Farmington, Connecticut, USA
  11. Trying to perform significant nutritional research on humans may be the most difficult endeavor that clinical scientists ever attempt (1). Truly meaningful data about the relationship between what we eat and how it affects our health frequently takes many years to produce. The gold standard of clinical research, the randomized controlled trial, is very difficult to perform when testing nutritional hypotheses. This is because of the variability in compliance that inevitably occurs when subjects are attempting to maintain a specific diet for long periods and the considerable costs that are associated with such a study. Hence, it is no wonder that there are multiple theories about what constitutes an appropriate diet for bone health. It is clear that there are minimum requirements for vitamin D and calcium to maintain healthy bones. This is best demonstrated by the discovery in the early twentieth century that infants and children need adequate dietary vitamin D and calcium to prevent rickets (2). In 2010 the Institute of Medicine provided recommended daily dietary allowances for calcium and vitamin D based on their review of the available evidence (3). However, in which form we obtain either of these has been the subject of some debate. There is conflicting evidence about whether calcium supplements have adverse effects on health (4, 5), so many healthcare providers have been recommending that dietary calcium be obtained through food, which most often means dairy products like milk or its derivatives, cheese and yogurt. However, recently there have been commentaries in the popular press which question the utility of milk consumption in adults (6, 7). In support of these questions, a meta-analysis by Bischoff-Ferrari et al found no significant association between milk consumption and hip fractures in adults (8). 
Similarly, Michaëlsson et al found that milk consumption in adults in Sweden did not decrease the risk of fracture in men or women and may actually have increased fracture rates in women and mortality in both men and women (9). The effect of milk consumption during adolescence on the risk of fractures in adulthood is probably clearer but is not without conflicting results. Multiple retrospective studies have reported that milk consumption in childhood and adolescence is inversely related to the risk of fractures in adulthood (10). However, a recent report by Feskanich et al found the opposite outcome in men, which the authors partially attributed to the greater height achieved by men who consume higher amounts of milk as adolescents (11). So what do we do with all this data? I think there is a consensus that there are minimum requirements for vitamin D and calcium to maintain bone health during childhood, adolescence and adulthood as outlined in the recent Institute of Medicine report (3). Dairy remains an excellent source of these nutrients. However, exceeding the threshold needs may not provide additional benefit and may have adverse effects. Healthcare providers should be aware that excessive supplementation of any nutrient is not without risks and should make their patients aware of such risks. Joe Lorenzo, Farmington, Connecticut, USA
  12. The ability to manipulate the genome of bacteria and viruses is now a common tool of experimental biology. More recently, technologies have emerged that allow the development of genetically manipulated animals for use in research and commercial applications. However, until recently, production of genetically manipulated animals was an expensive and laborious process that relied on spontaneous homologous recombination in embryonic stem cells. It was not unusual for the cost of this technology to exceed $20,000 and the process to take over 12 months to complete. In the past few years, technologies for directing the targeting of mutations to specific sequences and rapidly incorporating novel DNA into the genome have been developed and are revolutionizing our ability to manipulate and study genes. Among these are zinc finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs) and the clustered regularly interspaced short palindromic repeat (CRISPR)-Cas9 system. Of the three, CRISPR-Cas9 technology shows the most potential and has generated the most interest (1). The system was first identified as a mechanism for adaptive immunity in eubacteria and archaea. It was discovered that some bacteria store short DNA sequences from the genome of viruses that have previously invaded a cell or its ancestors and use these sequences to inactivate similar pathogens that are attempting to attack the cell. However, CRISPR-Cas9 is one of those discoveries of pure basic research that is now found to have wide-reaching implications. In the last two years it has been demonstrated that this system can be easily used to manipulate mammalian genetic material in numerous species. Cas9, which stands for CRISPR-associated protein 9, is an RNA-guided DNA nuclease enzyme. It cuts DNA at a site that is directed by “guide RNA”. 
In bacteria this guide RNA is transcribed from the stored short pathogen sequences in the bacterial genome that are the result of previous infection. However, for genetic engineering the guide RNA is generated artificially. The elegance of the system is its high specificity for introducing double-stranded cuts in DNA. Co-introduction of targeting vectors, which contain DNA sequences that direct cells to incorporate specific new DNA sequences into the genome at the Cas9-cut site, facilitates highly efficient, targeted DNA manipulation. Recent use of this technology has seen success in generating both “knock out” and “floxed” mice (2) for use in defining the in vivo function of genes with global or targeted gene deletion. The theoretical advantages of creating animals using this technology over other available methods are the ease of preparing specific reagents and the significant decrease in associated costs and time for generating animals. The technology also permits the introduction of genetic mutations in species for which embryonic stem cells are not available. However, the potential of the technology is only beginning to be explored. Already, there is talk of using the technology to control malaria by creating “gene drives” (3). These are DNA sequences that can be introduced into the genome of a mosquito. Using CRISPR-Cas9 technology, such “gene drives” can be constructed to contain all the information necessary to create targeted genomic mutations for malaria resistance. The goal would be to release the resultant transgenic mosquitoes into the wild with the hope that the genetically manipulated mosquitoes would mate with wild mosquitoes. Because of the engineered sequences in the “gene drive”, all subsequent progeny from these matings would be resistant to malaria infection. 
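The targeting logic described above is simple enough to sketch in a few lines of code. The following toy Python example (my own illustration, not a bioinformatics tool; the guide and genome sequences are invented) shows how a 20-nucleotide guide sequence plus an adjacent NGG "PAM" motif determines where Cas9 introduces its double-stranded cut, roughly 3 base pairs upstream of the PAM.

```python
# Toy sketch of CRISPR-Cas9 target recognition. Not a real design tool:
# real guide design must also consider off-target matches, strand, and
# mismatch tolerance. The sequences below are made up for illustration.

def find_cas9_cut_sites(genome: str, guide: str) -> list:
    """Return 0-based positions where Cas9 would cut: an exact match to
    the guide (the 'protospacer') followed by an NGG PAM, with the blunt
    cut made 3 bp upstream of the PAM."""
    sites = []
    for i in range(len(genome) - len(guide) - 2):
        protospacer = genome[i:i + len(guide)]
        pam = genome[i + len(guide):i + len(guide) + 3]
        # Cas9 requires the protospacer match AND an NGG PAM just 3' of it
        if protospacer == guide and pam[1:] == "GG":
            sites.append(i + len(guide) - 3)  # cut site, 3 bp 5' of the PAM
    return sites

guide = "GACGTTACCGGATTACGCAT"                 # hypothetical 20-nt guide
genome = "TTTT" + guide + "TGG" + "AAAACCCC"   # guide followed by a TGG PAM
print(find_cas9_cut_sites(genome, guide))      # [21]
```

Without the PAM check, the enzyme (and the sketch) would cut anywhere the guide matched; the PAM requirement is part of what gives the system its specificity.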
In an organism like the mosquito, which has a relatively rapid reproductive cycle, it might be possible to make all members of the species resistant to malaria infection within a few years. However, the environmental consequences of such a process have yet to be explored and need to be well understood before such a project is allowed to proceed. Equally challenging is the prospect of using this technology to manipulate animal or human embryos or mature tissue to correct genetic diseases (4). It is theoretically possible for the CRISPR-Cas9 system to correct diseases like sickle cell anemia or cystic fibrosis, which are caused by mutations in a single gene. More frightening is the eugenic potential of the technology to engineer almost any polymorphism into the embryo of an organism. The era of “bespoke” genes is now at least within the realm of possibility and it will be a challenge for both science and society in general to apply this technology in an ethical way. Joe Lorenzo Farmington, CT, USA
  13. Most medical research is funded by either government or philanthropic organizations. The vast majority of these make their funding decisions based on a rigorous peer review of the scientific merit of a specific project. In the United States the National Institutes of Health (NIH), by far the largest source of funds for biomedical research, dispenses over 30 billion dollars each year. The majority of grant proposals to the NIH are reviewed on a multipoint scale, which examines both the project and the investigator. However, for most of these it is the merits of the proposed project that carry the greatest weight. NIH peer review of most grants is characterized by a rigorous analysis of the strengths and weaknesses of an application and requires the applicant to provide great detail about the rationale for the study, its potential impact, the anticipated outcomes and the pitfalls of the proposed research. However, not all granting agencies base their decisions to fund on a project-based paradigm. In the United States the Howard Hughes Medical Institute (HHMI) has, for most of its existence, based its funding decisions on the productivity and creativity of the investigators it supports. The rationale for this approach is that such a program frees scientists to be more creative and risk-taking, while decreasing their need to write multiple grant applications. In 2004 the NIH Director’s Office initiated the Pioneer Award Program as an experiment to see if a similar approach would produce more innovative and higher impact science. A 2012 report to the NIH concluded that this was the case (1). Based on the success of the Pioneer Award Program, the NIH is now encouraging all of its 27 institutes and centers to initiate similar programs. 
In a recent Science magazine news account, Sally Rockey, NIH Deputy Director for Extramural Research, was quoted as saying the objective of this program is to “…unbridle scientists a bit”, allowing them to “…step off the grant treadmill” (2). The National Institute of General Medical Sciences is currently seeking feedback about the possibility of giving all its investigators (about 3300) the ability to switch from a project-funded award to one that is longer-term and based more on productivity. The National Cancer Institute recently announced that it would begin to fund a new 7-year award that will replace project-based grants. It eventually hopes to fund about 16% of its budget through such awards. Other NIH institutes are being more cautious and, as reported in Science magazine, NIH Director Francis Collins noted that the Wellcome Trust in the United Kingdom shifted a few years ago to a more investigator-focused awards process but is now reevaluating this strategy. One concern is that it takes many years to establish a productive and creative track record. Hence, focusing funding decisions on an investigator-centered strategy may skew awards to more senior researchers at the expense of junior scientists. Particular effort therefore needs to be made to identify junior investigators of high potential and then invest in them early in their careers. The Howard Hughes Medical Institute has been successful in this approach and any attempt by NIH to initiate an investigator-oriented funding program needs to take similar care to nurture junior investigators. In truth, both the investigator- and project-oriented approaches have flaws as well as unique advantages. The ideal funding program would be one that is a mixture of both, which, I suspect, is the goal of NIH Director Francis Collins in exploring this initiative. It will be interesting to see how it develops and affects United States biomedical research. Joe Lorenzo Farmington, CT, USA
  14. Vitamin D is essential for bone health. However, significant controversy exists about which technology should be used to assay vitamin D levels in serum, what values constitute evidence of insufficiency and whether vitamin D levels should be routinely determined in the general population as part of a preventative screening program. In 2010 an Institute of Medicine (IOM) panel issued guidelines, which stated that most adults 18 to 70 years old need to consume 600 IU of vitamin D per day while those older than 70 should consume 800 IU per day (1). The IOM panel also stated that total serum 25-hydroxy (25 OH) vitamin D levels of greater than 20 ng/ml (50 nmol/l) were adequate to maintain bone health for the vast majority of people. In contrast, the Endocrine Society, while also defining deficient individuals as those with serum 25 OH vitamin D levels below 20 ng/ml, created a category of “insufficiency” to describe individuals with serum 25 OH vitamin D levels between 20 and 30 ng/ml (50 to 75 nmol/l) (2). The Endocrine Society also did not recommend routine screening for vitamin D deficiency in the general population of healthy individuals who are not at risk. Now the United States Preventive Services Task Force has issued draft recommendations, which can be commented upon until July 21, 2014, regarding whether or not routine screening for vitamin D deficiency in the general population is desirable (3). They conclude “…that current evidence is insufficient to assess the balance of benefits and harms of screening for vitamin D deficiency”. The panel cited numerous issues, which led them to their conclusion. These include that: 1) there is no consensus definition of vitamin D deficiency, and the optimal level of total serum 25-hydroxy vitamin D is debatable. 2) there is a lack of studies using an internationally recognized reference standard. 
The panel found “evidence suggesting variation in results between testing methods and between laboratories using the same testing methods”. In addition, recent evidence in African American populations in the United States argues strongly that serum bioavailable, rather than total, 25 OH vitamin D levels are a more accurate measure of vitamin D stores in individuals. However, assays for bioavailable 25 OH vitamin D are not currently readily available except in research laboratories. 3) the published evidence argued that “treatment of asymptomatic vitamin D deficiency has no benefit on cancer, type 2 diabetes, mortality in community-dwelling adults, and risk for fractures in persons not selected based on having a high risk for fracture.” The panel did state that the risks of treating individuals with vitamin D using conventional dosing were low. However, consumption of vitamin D with calcium in conventional doses may be associated with an increased risk for kidney stones, although this does not appear to be true for treatment with vitamin D alone. In contrast, very high dosing of vitamin D, leading to total serum 25 OH vitamin D values of greater than 200 ng/ml (500 nmol/l), may cause hypercalcemia, hyperphosphatemia, suppressed parathyroid hormone levels, and hypercalciuria. Given the current enthusiasm for vitamin D testing (the number of assays for serum 25 OH vitamin D increased by 50% between 2008 and 2009), these recommendations suggest that most physicians in general practice should first evaluate the risk that an individual might be vitamin D deficient before ordering the test. Individuals at high risk include those with poor intake of vitamin D in food or from supplements, a history of malabsorption, a history of using medications that decrease serum vitamin D levels such as certain anti-seizure drugs or a history of limited sunlight exposure. Joe Lorenzo Farmington, CT, USA
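For readers who, like me, constantly juggle the two unit systems quoted above: the ng/ml and nmol/l thresholds are related through the molar mass of 25-hydroxyvitamin D3 (about 400.6 g/mol), which works out to a factor of roughly 2.5. This short Python sketch (my own back-of-the-envelope check, not part of any guideline) reproduces the paired values in the IOM and Endocrine Society cutoffs.

```python
# Unit-conversion sketch for serum 25 OH vitamin D thresholds.
# Assumes the molar mass of 25-hydroxyvitamin D3 (~400.6 g/mol);
# note that 1 ng/ml is the same concentration as 1 µg/l.

MOLAR_MASS_25OHD3 = 400.6  # g/mol, approximate

def ng_per_ml_to_nmol_per_l(ng_per_ml: float) -> float:
    # (µg/l) / (g/mol) * 1000 = nmol/l; i.e. about 2.5 x the ng/ml value
    return ng_per_ml / MOLAR_MASS_25OHD3 * 1000

for ng in (20, 30, 200):
    print(f"{ng} ng/ml ~ {ng_per_ml_to_nmol_per_l(ng):.0f} nmol/l")
# 20 ng/ml ~ 50 nmol/l, 30 ng/ml ~ 75 nmol/l, 200 ng/ml ~ 499 nmol/l
```

This is why 20 ng/ml is quoted as 50 nmol/l and 30 ng/ml as 75 nmol/l; the toxicity threshold of 200 ng/ml rounds to the quoted 500 nmol/l.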
  15. Bone is a highly vascular organ. It has been known for many years that endochondral bone development coincides with angiogenesis in bone (1). However, the molecular signals that regulate the development of bone and its vascular system and the importance of their interactions for bone development have only recently been appreciated. It is now recognized that bone cells make factors that regulate angiogenesis. Among these, vascular endothelial growth factor (VEGF), a major stimulus of angiogenesis, plays an important role. Additional studies have shown that production of VEGF in a variety of tissues, including bone cells, is regulated by a family of transcription factors termed hypoxia-inducible factors (HIF) (2). These heterodimeric proteins are composed of alpha and beta subunits. There are three HIF alpha subunits whose protein levels in cells are regulated by the oxygen tension in that cell. Perhaps the most studied is HIF1α. Previous work demonstrated that targeted overexpression of HIF1α in the osteoblasts of mice increased their bone mass (3). Conversely, deletion of HIF1α in osteoblasts decreased bone mass (3). Hence, HIF1α production in osteoblasts appears necessary for normal bone development and growth. In addition, it was shown that estrogen destabilized HIF1α protein in osteoclasts and, in this way, prevented osteoclast activation and bone loss (4). In mice with a targeted deletion of HIF1α in osteoclasts, estrogen withdrawal after ovariectomy (a model of the post-menopausal state) did not cause bone loss as it did in normal mice. Now two papers from the laboratory of Ralf Adams (5, 6) have examined the role that endothelial cells, which line blood vessels, play in bone development. These authors identified specialized endothelial cells in the metaphysis of mouse bones, which they termed type H to distinguish them from endothelial cells in the diaphysis, which they termed type L. 
Type H endothelium was relatively unique to metaphyseal bone and could only be found in one other organ, the liver. Furthermore, osteoprogenitor cells, which form osteoblasts and osteocytes, uniquely clustered around type H but not type L endothelial cells. Most fascinating, aging was associated with decreased numbers of type H but not type L endothelium in bone. This group then examined whether production of HIF1α in endothelial cells regulated the development of type H endothelium and bone mass. Using targeted deletion and targeted enhanced expression of HIF1α in the endothelial cells of mice, they demonstrated that deletion of HIF1α markedly decreased type H endothelium and bone mass without affecting type L cells. Conversely, enhanced expression of HIF1α in endothelium increased type H cells and bone mass. This group also identified notch signaling in endothelial cells and noggin production by these cells as being involved in the coupling of endothelial cells with osteoprogenitor cells. Significantly, these authors showed that treatment of mice with the iron-chelating drug deferoxamine mesylate, which inhibits HIF1α degradation and increases intracellular HIF1α protein levels, increased both type H cells in the metaphysis and bone mass in mice. Similarly, Liu et al (7) found that treatment of mice with drugs that increase HIF1α levels increased bone mass in ovariectomized mice and Wan et al (8) showed that analogous therapy enhanced bone regeneration in a distraction gap model in mice. Overall, these studies add much to our understanding of the interaction of bone with its blood vessels. There is some contradiction in the mouse data, which needs to be resolved, since Miyauchi et al (4) found that inhibiting HIF1α selectively in osteoclasts prevented ovariectomy-induced bone loss while Liu et al (7) found that globally enhancing HIF1α with drugs increased bone mass in this condition. 
Iron-chelating drugs are already approved to treat humans with conditions associated with iron overload. Most exciting is the prospect that these or other agents that globally increase HIF1α levels in cells may have utility as treatments for osteoporosis and other diseases associated with a relative decrease in osteoblast activity. Joe Lorenzo, Farmington, CT, USA