Rapid electrical discharge experiment

17 November 2020.  We ran an experiment in the middle-school science lab today.  We discharged a big electrical capacitor through a tiny wire.  The energy deposited in the wire obliterated it.  We went way beyond the “Gee! Wow!” aspect of the experiment.  We used lots of physics and math in analyzing the results.  We used equipment uncommon in school labs to set up the experiment and measure the voltages.  We also used a high-speed video camera to catch the demise of the wire in action.  This revealed interesting details, including the huge power and huge current coming out of the capacitor.

The write-up with lots of images is here:

Learning from a demonstration of an electrical spark vaporizing a bit of copper wire

Spinlaunch is demonstrably impossible

An investment in an impractical technology

Summary of the impracticality of Spinlaunch

The New Mexico Spaceport (https://www.spaceportamerica.com/), funded by taxpayers, started with Virgin Galactic’s space tourism entity as its anchor tenant. It has gained other tenants, though it is not yet economically sustainable; space tourism may start in late 2019.

One new tenant is Spinlaunch, a company from Sunnyvale, California (http://www.spinlaunch.com/). They’ve raised $40M from investors (https://www.bloomberg.com/news/articles/2018-06-14/this-startup-got-40-million-to-build-a-space-catapult) including Google Ventures (now GV) and Airbus Ventures for a speculative technology, which I shall describe shortly. They propose to use a large spinning platform to launch satellites from the ground (with a rocket needed to complete the boost to orbit).

The idea sounded preposterous to me, so I worked out the limitations, which I claim are solidly against this being practical or even possible. Join me now:

The basic technology proposed is:

  • A vacuum chamber with a radius of about 50 m (Bill Gutman from Spaceport America let out the knowledge that my first estimate of 500 m was “an order of magnitude too high;” pushing on a completely unrealistic guess helped spring this information loose).
    • Bill says it’s patented, but there’s only a patent application dated July 2018, US 2018/0194496 A1, to Jonathan Yaney.
    • I note that a patent says nothing about the practicality of an “invention.” Patent examiners are not allowed to decide on issuing a patent based on practicality. I note that Henry Latimer Simmons obtained patent 536,360 for a ludicrous invention to let one train pass over the top of another on one track.
  • Placing the satellite with its rocket motor on the periphery and spinning up to a tangential speed of Mach 4-5, as Bill cites. Naturally, this could not be LEO (Low Earth Orbit) speed; at that speed the satellite would burn up on launch here in the lower atmosphere.
  • Upon launch, a rocket engine ignites to reach the speed for attaining LEO.

Calculations:

  • I’ll take the lower speed, Mach 4, about 1,320 m s⁻¹, to give the least stressful conditions.
  • At a radius r = 50 m and a speed v = 1,320 m s⁻¹, the centrifugal acceleration is very simply calculated as v²/r = 34,850 m s⁻². That’s very closely 3,500 g! We’re talking about a satellite and its rocket engine withstanding this, including the electronics.
    • Bill Gutman says that there are already military projectiles that get accelerated at 40,000 to 50,000 g – to get a muzzle velocity of 1,000 m s⁻¹ in a 10-m barrel. The electronics are potted to withstand the acceleration (https://www.raytheon.com/capabilities/products/excalibur).
    • Fine, but:
    • (1) A satellite has to have folded solar panels and antennae. These cannot be potted, and I cannot imagine any folding and cushioning that doesn’t destroy the joints or the panels. The military projectiles only have to deploy small vanes to steer. (I also don’t know how their performance meets specs.)
    • (2) To reach vLEO (calculations below), there has to be a rocket engine. It will have to be a solid-propellant engine; the complex plumbing and pumps of a liquid-fueled engine could not possibly survive 3,500 g.
    • This engine should really be two-stage. An effective vLEO of over 8,000 m s⁻¹ is needed, with a bit of thrust vectoring to go from horizontal to tangential in the trajectory, as well as to overcome drag in the initial part of the trajectory. I get an estimate closer to 9,200 m s⁻¹, not achievable with one stage with solid propellant; see below. The exact calculation of air drag would be similar to the math for interceptors such as Nike or the more modern (and low-effectiveness) GMD. So, the additional speed needed is well over 6,700 m s⁻¹.
    • The classic rocket equation expresses the gain in speed (yes, let’s say speed, since direction is not specified and does change) as Δv = vex ln(m0/mf), where vex is the exhaust velocity as determined by the propellant type and m0 and mf are the initial and final masses of the rocket. I’ve written this up, too (https://science-technology-society.com/wp-content/uploads/2018/01/rocket_equation_in_free_space.pdf). We assume the loss of mass is that of propellant. Taking the final hull and payload (satellite) as having a mass of only 10% of the initial mass (90% burn), we get the logarithmic factor as ln(10) = 2.3.
    • Solid propellants have only a moderate vex, hitting about 2,500 m s⁻¹. We get Δv = 5,750 m s⁻¹. Yes, I’d say that a second stage is necessary.
    • (3) Can a solid-propellant rocket withstand the lateral acceleration? Of course, the rocket has to point up, so the rocket and payload are aligned perpendicularly to the radius. There is an enormous bending force exerted on the rocket body. The force also gets relieved almost instantaneously on launch, generating a change in acceleration called, appropriately, jerk. This sets parts of the launched item into sharp motion – like your innards if you’re in a high-speed traffic accident.
    • (4) How big a satellite can be launched, given materials limitations? There are some small satellites, e.g., the CubeSats, but they have economical and reliable launches already on standard rockets. For more practical sizes, I’m not about to do the engineering calculations to estimate the stresses on the launch platform and the safety factor. This assumes that the payload and its own rocket survive, which I flatly reject, as above. I note that:
    • (5) The whole idea was to save energy and cost in launching satellites. There’s a lot of energy put into the launch mechanism, far more than the kinetic energy imparted to the (putative) rocket + payload. Maybe some could be recovered in electromagnetic braking…needing a significant amount of electrical storage and circuits to handle massive currents.
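The two headline numbers in the list above – the centripetal acceleration and the single-stage Δv from the rocket equation – can be checked in a few lines of Python, using only the values quoted in the text:

```python
import math

# Spinlaunch numbers as quoted in the text: Mach 4 at a 50 m radius.
v = 1320.0   # tangential speed, m/s
r = 50.0     # rotor radius, m

a = v**2 / r                 # centripetal acceleration, m/s^2
g_load = a / 9.81            # expressed in units of Earth gravity

# Classic rocket equation: delta-v = v_ex * ln(m0/mf).
v_ex = 2500.0                # solid-propellant exhaust speed, m/s
mass_ratio = 10.0            # 90% of initial mass burned as propellant
delta_v = v_ex * math.log(mass_ratio)

print(f"centripetal acceleration: {a:,.0f} m/s^2 ({g_load:,.0f} g)")
print(f"single-stage delta-v:     {delta_v:,.0f} m/s")
```

Both results reproduce the text’s figures: roughly 34,850 m s⁻² (about 3,500 g) and Δv ≈ 5,750 m s⁻¹, well short of the more than 6,700 m s⁻¹ still needed after release.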

There are other niceties:

  • Consider the extremely tight timing needed to release the rocket + payload. Suppose we want a launch direction error not to exceed 1°, or 1/57 of a radian. To reach a tangential speed of 1,320 m s⁻¹ at a radius r = 50 m, one needs a rotation rate of 1320/50 = 26.4 radians s⁻¹. That’s a bit over 1,500 degrees per second. The release window is less than 1 ms wide.
  • Safety: What’s the shield in case the launch mechanism fails, sending out shards at high speed? How about releasing the rocket + payload nearly horizontally by accident? You need a BFS, a big functional shield.
  • How about the reaction of the spinning platform when the rocket + payload is released? That’s quite a jolt on the suspension. Maybe some engineers can address that, but not the fundamental no-gos (a neologism?) I’ve noted all through.
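The release-window estimate in the first bullet can be verified the same way (a sketch using the Mach 4 figures from the text):

```python
import math

v = 1320.0            # tangential speed, m/s
r = 50.0              # rotor radius, m
omega = v / r         # angular rate, rad/s (26.4 rad/s)
deg_per_s = math.degrees(omega)

# An allowable pointing error of 1 degree sets the release window.
window_s = 1.0 / deg_per_s
print(f"rotation rate:  {deg_per_s:,.0f} deg/s")
print(f"release window: {window_s * 1e3:.2f} ms")
```

At roughly 1,500 degrees per second, the 1° window works out to about 0.66 ms – the “less than 1 ms” quoted above.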

Conclusion:

  • This is a pipe dream or a scam, poorly thought out at the very best.
  • Yes, Google Ventures, now known as spinoff GV, and Airbus Ventures are among the investors in this. I can only attribute their lack of due diligence to their lack of sufficient technical expertise – I think Google and Airbus, both technically solid, spun off the MBAs and not the engineers into their Ventures.

The rest of the analysis uses equations set by MathType in Microsoft Word.  These don’t come through in the Mammoth docx converter plug-in to WordPress, so I link here to a PDF version of the rest of the analysis.

 

Fun learning chemical kinetics

Our students in grades 4-7 at the Las Cruces Academy, our non-profit private school, enjoyed tracking the decolorization of phenolphthalein by hydroxide ion.  Phenolphthalein is that beautiful rose-pink indicator dye used in acid-base titrations.  It changes structure a bit in base but then slowly undergoes a permanent decolorization.  The kinetics (rate and order of reaction) are interesting and fun to follow.  A write-up from the 2018-19 school year is here.
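For readers who want to try the analysis themselves, here is a minimal sketch of how such kinetics can be extracted, assuming pseudo-first-order decay under excess hydroxide; the absorbance readings below are invented for illustration, not our students’ actual data:

```python
import math

# Hypothetical absorbance readings (arbitrary units) at 1-minute intervals,
# generated only to illustrate the analysis; real data are in the write-up.
times = [0, 1, 2, 3, 4, 5]                     # minutes
absorbance = [1.00, 0.74, 0.55, 0.41, 0.30, 0.22]

# Under excess OH-, first-order kinetics predict ln(A) falls linearly in time.
logs = [math.log(a) for a in absorbance]

# The least-squares slope of ln(A) vs. time gives -k,
# the pseudo-first-order rate constant.
n = len(times)
mt = sum(times) / n
ml = sum(logs) / n
slope = (sum((t - mt) * (l - ml) for t, l in zip(times, logs))
         / sum((t - mt) ** 2 for t in times))
k = -slope
print(f"pseudo-first-order rate constant k = {k:.2f} per minute")
```

A straight line on the ln(A) plot confirms first-order behavior in phenolphthalein; varying the hydroxide concentration would then reveal the order in hydroxide.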

Why are plants green?

My answer melds the physics and chemistry of light-driven chemical reactions – photosynthesis – with evolutionary biology and some properties of a star system.  In brief, plants and other photosynthetic organisms need a source of stellar energy (good old Sol, for our Earth) with a good distribution of radiation (a good star temperature).  The spectrum  of visible light is highly appropriate, with energies of its photons or light particles adequate for photochemical reactions but not so high as to be prone to breaking up molecules directly.  A chemical that can capture and  hold the energy from absorbing the photons of that energy level is a great rarity – that’s chlorophyll, and by chance it absorbs red and blue, leaving much of the green.  Now we can delve into intriguing details:

Well, this is not really at the popular science level, as it requires some grasp of how molecules interact with light.  If you’re game for a bit of science beyond that of a fair fraction of Web users, here goes:

First, energy availability: most of the energy in the spectrum of the Sun is in visible light, of wavelengths between 400 and 700 nanometers (nm, billionths of a meter).  (Compare a human hair’s diameter, 80,000 to 100,000 nm.) Extreme blue light, the shortest wavelength most people can see, is 400 nm; deep red is 700 nm. This range of wavelengths happens to be close to the same as that absorbed by green plants.  It makes evolutionary sense for plants to use this abundant energy source.  We haven’t yet argued why the green isn’t used well, thereby being sent off substantially unused from leaves, to reach our eyes as pretty green light.

Second, energy utility: to make new, stable, energy-packed molecules – that is, sugars – from other molecules in the environment – water and CO2 – takes the amount of energy that’s carried by the particles of light, the photons, in this range of wavelengths.  That photon energy is E = hν, with ν, or “nu,” being the frequency of the light; h is Planck’s constant.  We can quote that in various units in science, as 1.77 to 3.09 electron-volts (ah, battery voltage, which we know can cause chemical reactions by breaking or remaking bonds).  If you like old English units (alas, not used in science) and you consider a mole of photons – a mole is Avogadro’s number of anything – then that’s 171 to 299 kilojoules or 41 to 71 kilocalories (food calories).  Even with that much energy per photon or mole of photons, it takes a pooling of energy from a couple of photons to move an electron from water onto CO2 when making sugars.
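The unit conversions in this paragraph are easy to reproduce; here is a short Python sketch using the equivalent form E = hc/λ:

```python
# Photon energy E = h*c/lambda, expressed per photon (eV) and per mole (kJ/mol).
H = 6.626e-34      # Planck's constant, J s
C = 2.998e8        # speed of light, m/s
EV = 1.602e-19     # joules per electron-volt
NA = 6.022e23      # Avogadro's number

def photon_energy(wavelength_nm):
    e_joules = H * C / (wavelength_nm * 1e-9)
    return e_joules / EV, e_joules * NA / 1000.0   # (eV, kJ/mol)

for nm in (700, 400):   # deep red and extreme blue ends of the visible range
    ev, kj = photon_energy(nm)
    print(f"{nm} nm: {ev:.2f} eV per photon, {kj:.0f} kJ per mole")
```

The output matches the quoted range: 1.77 eV (171 kJ/mol) at 700 nm and 3.10 eV (299 kJ/mol) at 400 nm.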

Two sidelights on energy utility: First, there are bacteria that use light of even longer wavelengths, to about 850 nm.  Their photochemical reactions differ from those used by green plants.  That’s about the lowest energy that’s usable.  Second, higher energy photons in the ultraviolet can do photochemistry even more vigorously… too vigorously, as we know.  We get sunburn, all organisms get DNA breakage and other damage.  One great feature of the Earth is the ozone shield – among many special  conditions (see my other essays, about habitable planets and about Proxima Centauri b, in particular).  That layer keeps out most UV light.

Third, energy retention: Chlorophyll has unique photophysical properties.  Consider what happens when a molecule of any type is able to absorb light.  There has to be a jump in molecular motion – rotation, vibration, or the “orbital” motion of its electrons – with a difference in energy from the starting point that exactly matches the energy of the light particle or photon.  OK, visible light is fairly energetic, able to excite changes in electronic motion and even to break bonds in some cases.  We’re considering a Chl (chlorophyll) molecule starting out in what’s called the ground electronic state.  When it absorbs light it jumps into a new electronic state.  There are two states readily available when visible light is absorbed.  One, the first excited singlet state (S1) lies at the difference in energy from the ground state that matches the energy embodied by red light.  There is also a second excited singlet state (S2) at the energy embodied by blue light.  Most organic molecules that absorb electromagnetic radiation that is as energetic as red or blue light don’t hold onto the energy of excitation long.  Particularly if the molecule is complex (and Chl is, having 137 atoms), there are many ways of converting the energy of electronic excitation into other motions of the molecule – and of neighboring molecules – such as vibration and rotation, thus, eventually to heat.  Chlorophyll is a rarity, in that it holds onto the “high-quality” energy of electronic excitation a relatively long time, on the order of billionths of a second.  That’s long enough for the excitation to  be used in a photochemical reaction, the start of photosynthesis.  The excitation can be transferred from one Chl to another, in what’s called nonradiative transfer or Förster transfer – the receiving molecule has an energy level to match that of the donor.

One paragraph with some quantum mechanics: That is, Chl has low rates of loss by internal conversion (to heat) or intersystem crossing (to the first excited triplet state, to which direct absorption is first-order forbidden by quantum symmetry rules, and with which there is a danger of creating damaging singlet oxygen from ground-state triplet O2). G. Wilse Robinson at Caltech had a great course on this topic, using a sophisticated analysis of the density of states (rovibronic states, or quantum energy levels). I wrote about this years ago – see V. P. Gutschick. 1978. Concentration quenching in chlorophyll-a and relation to functional charge transfer in vivo. J. Bioenerg. Biomembr. 10: 153-170. Note that absorption spectra are broadened by local environmental interactions of the Chls to cover a good fraction of the solar spectrum, and also there are auxiliary pigments (carotenoids) to fill in even more of the spectrum. So, leaves absorbing 85% of the solar spectrum is a good deal!  Chlorophyll thus makes excellent use of the spectrum of light from the Sun.

Chlorophyll is then a unique molecule, with a few variant forms (Chl a, Chl b, bacteriochlorophyll).  You can’t fault it for not absorbing light at all the visible wavelengths.  Absorb red and absorb blue and you leave green to be passed on to the eye by reflection or transmission.  So, the whole package of properties comes down to the requirement for a molecule that leaves us the green (did I intend that pun?).

There’s more to say, much more, but here is a short discussion of how and why plants seem to waste a lot of the energy in sunlight.  Chlorophyll and the auxiliary pigments absorb more light in bright sunlight than they can use.  This is an interesting situation in evolution.  Here are considerations:

First, leaves can’t pack in enough enzymes, the proteins that carry out the sugar-making reactions by using the energy of light that’s been captured in the photochemistry, which makes molecules that store energy but are unstable.  The products of photochemistry have to be used in a complex series of nonphotochemical or “dark” reactions.  These reactions use proteins, enzymes, that catalyze the combination of various chemical species along a long and tangled path.  It happens that the enzyme that does the first, critical reaction, combining CO2 with a receptor molecule, ribulose bisphosphate, has a low reaction rate or turnover number.  One molecule of ribulose bisphosphate carboxylase/oxygenase (Rubisco, for short, or a great name for a plant physiologist’s dog) can only process a dozen or so CO2 molecules per second.  Other enzymes are blazingly faster: carbonic anhydrase turns over about 500,000 molecules per second of CO2 from carbonic acid (good for us air breathers, so that we can remove CO2 as a waste product of our metabolism). There’s a good reason for Rubisco being so slow – its reaction is the second most difficult reaction in the biological world, after the reaction catalyzed by the nitrogenase enzyme in nitrogen-fixing microbes.  The chemical bonds in CO2 are very tough, almost as tough as those in N≡N or N2 (ordinary nitrogen gas).  It takes real energy to change these bonds. A leaf that could use all the energy in full sunlight would be very thick, with concurrent problems in moving reactants around. Thus, leaves hit a maximal rate of photosynthesis at a modest fraction of full sunlight, maybe 1/20 for some trees to 1/2 for some very robust photosynthesizers such as the “weed” Camissonia claviformis.  You might look at a couple of my publications for more ideas: V. P. Gutschick. 1984. Photosynthesis model for C3 leaves incorporating CO2 transport, radiation propagation, and biochemistry. 1. Kinetics and their parametrization. Photosynthetica 18: 549-568, and V. P. Gutschick. 1984. ____. 2. Ecological and agricultural utility. Photosynthetica 18: 569-595.

Second, and related, is a sort of CO2 starvation of plants, another great story in evolution.  In a somewhat simplified view, plants have been too successful in both photosynthesis and self-protection.  In the latter aspect, they make structural compounds, cellulose and lignins, that are hard for bacteria and fungi to break down, even as versatile metabolically as these organisms are.  As a result, a rather tiny fraction of carbon compounds derived from photosynthetic organisms ends up not being fully decomposed before the site of their deposition gets buried geologically.  That’s how we get coal, oil, and natural gas.  Even though we’re now doing our damnedest (an appropriate adjective, I offer) to burn these all back into CO2 in the air, the level of CO2 in the air has been dropping for eons, on average.  There are a lot of tales tied to this, such as climate change and Snowball Earth (check it out), but the key thing is that CO2 is at “only” 405 parts per million in today’s air (2018).  It’s a trick for leaves to take up CO2 with only a small driving force (concentration of CO2 outside the leaf relative to inside the leaf). In fact, it’s why plants need so much water.  With their leaf pores, the stomata (from the Greek for “mouth”), open, they take up CO2 but lose about 100 to 500 times more molecules of water.  The situation has been getting worse, then, for plants; their water-use efficiency is getting lower.  About twenty separate times, some plant lineages have evolved a new first step for capturing CO2 – the so-called C4 plants (and Crassulacean acid metabolism or CAM plants, similarly). I’ll skip the details here.

Third, the inability to use all the energy flow in full sunlight leads to interesting competitive relations among plants, as well as interesting light signalling in plants.  Plants exposed to full sun, the overstory plants (or their top leaves, at least), spend a lot of time light-saturated.  They use part of the solar energy and dump the rest as heat.  They have to really protect themselves from absorbing too much light and thus getting too much photodamage.  The leaves have the help of xanthophyll pigments that can absorb excess light and convert it to heat effectively (high rates of internal conversion of the energy of electronic excitation).  Still, thicker leaves with more Rubisco could help, as could having leaves presented at a steeper angle to the sun, thus spreading light out over a bigger area at a lower intensity.  The latter strategy has been used in crop breeding, in what’s called the erect-leaf hypothesis.  Modern varieties of maize (corn, in US parlance; Zea mays) have rather erect leaves and are more efficient in using light.  Another strategy is making leaves with less chlorophyll.  They’d have lower rates of photosynthesis in their top leaves but could share light with lower leaves.  Using results from the papers I just cited, in a follow-on article (oops; kinda big, at 5 MB, a scan of pages), I proposed that “pale mutants” would have 8% higher yields in dense stands of a monoculture.  This idea was picked up by colleague John Hesketh at the University of Illinois – and it worked!  (W. T. Pettigrew et al. 1989. Crop Science 29: 1024-1029.)  Crop breeders didn’t go for it – what farmer likes pale green crops?  There is also a very good ecological / evolutionary reason that wild plants don’t embody this – why share a resource, light, with competitors!

Signaling with light: The color quality of light changes as one moves deeper into a plant stand.  There’s more green, less red.  Plants have elaborate photosensors to control where they invest in making leaves and in growing stems.  They don’t respond to green, but to the ratio of red light to far-red light.  Deeper in the canopy, the red light is much reduced, having been absorbed by leaves above.  There is less reduction in the intensity of far-red light, at wavelengths just longer than the absorption edge of chlorophyll.  The red:far-red ratio detected by a plant changes the plant responses.  For a plant evolved to be an overstory plant, intolerant of shade, a low red:far-red ratio triggers a response to elongate its stem to help rise to the top, and at the same time to forgo developing leaves at depth.  There is a vast literature on such signaling responses in plants.

Those 137 atoms in chlorophyll: about 52 aren’t key to handling the electronic excitation arising from absorbing light.  They are in the “phytol tail,” a long hydrocarbon chain that helps the chlorophyll molecule embed in a fatty or lipid membrane.  Photosynthesis has to be done across a biological membrane for several reasons.  Among these are the safe separation of electrical charges and the ability to recoup some energy by letting charges come back through what’s effectively a turbine to make ATP, the cell’s energy currency.  That’s a whole ‘nother story, not to be pursued here.

One final note here: Since there’s less light deeper in a stand of plants (a “canopy”), should lower leaves have progressively less investment in photosynthetic enzymes and overall mass?  Frits Wiegel and I modeled that in 1984, publishing it later (V.P. Gutschick and F. W. Wiegel. 1988. American Naturalist 132: 67-86).  We came up with a profile of leaf mass per area vs. optical depth in the canopy that looks like that of real plants.  Making hypotheses from some deep physics and chemistry to see if the concepts work out in nature is so much fun, and it’s also of potential use.  I have lots of other plant models and tests published.  You can take a look at https://gcconsortium.com/about_us_founders_qualifications.html.

Addendum: A chemist’s joke: What is Avogadro’s number of avocados?  A guacamole.  Note that this number would fill the oceans about 100 times over.  Have fun working out the math.
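If you’d like a head start on working out that math, here is a sketch; the avocado volume is my assumption (about 200 mL each), so your mileage may vary:

```python
# A rough check of the guacamole joke, with assumed round numbers.
NA = 6.022e23                 # Avogadro's number
avocado_volume_m3 = 2e-4      # ~200 mL per avocado (an assumption)
ocean_volume_m3 = 1.3e18      # Earth's oceans, roughly 1.3 billion km^3

guacamole_m3 = NA * avocado_volume_m3
ratio = guacamole_m3 / ocean_volume_m3
print(f"oceans filled: about {ratio:.0f} times")
```

With these round numbers the guacamole fills the oceans nearly 100 times over, as claimed.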

Another one: Chlorophyll has 137 atoms.  That’s essentially the reciprocal of the fine-structure constant, which is a fundamental constant in the theory of electromagnetic interactions – which includes, of course, light with electrons in molecules.  I mentioned this to Nobel laureate Murray Gell-Mann once in the cafeteria at Los Alamos, to his amusement.

Pseudoscience about glutathione and nitrate

Glutathione, and other supplements purported to boost our bodies’ ability to make more of it, are touted in the March 2018 issue of the flier from Natural Grocers, here in Las Cruces.  It’s another case of looking for a magic bullet for health.  Actually, they propose a fusillade of magic bullets – turmeric, a raft of vitamins, and more.

Let’s look at glutathione, a known antioxidant, and the claims made by Natural Grocers.  First, they say that glutathione is the most abundant single molecule in our bodies, after water.  Not so!

  • Glutathione: it’s a soluble peptide (simply, three amino acids linked together).  Its concentration averages about 2.5 mM in cells, higher in liver, lower in other tissues; here, “mM” stands for “millimolar,” thousandths of a mole per liter of solution.  A mole of glutathione is 307 grams.  How many moles does that indicate are in our bodies (varying with our size and genetics, etc.)?  Well, a 70 kg person at 70% water contains about 50 liters.  Multiply that by 0.0025 moles per liter and you get 0.125 moles, or about 38 grams.  Now for comparisons to more abundant molecules:
  • Cholesterol:  it’s vitally important in every tissue, especially in the brain, as a component of the membranes in every cell.  I talked about this on my radio show published on YouTube (first and second segments on 19 December 2017).  Consider the brain, alone.  The fresh mass of the brain is about 1.3 kg, and 2.5% of that is cholesterol, or about 32 g.  The brain holds 1/5 of the cholesterol in the body, so the body’s total is about 160 g.  That’s more than four times the mass of glutathione.  It’s also more molecules; the molecular mass of cholesterol is 387 g per mol, so the body contains about 0.42 mol of cholesterol.
  • ATP, adenosine triphosphate, the energy currency of the body.  There’s about 0.2 mol of ATP in the average human body, which is about 100 g, nearly three times as much as glutathione.  Note that we use up and regenerate each ATP molecule about 200 times each day!  Lots of energy trading.
  • Myoglobin, the oxygen-storage protein in muscle: It’s about 2.5% of dry muscle mass.  Dry muscle mass is about 30% of fresh muscle mass.  Fresh muscle mass is about 42% of the body in a fit person, or about 29 kg.  Thus, dry muscle mass is about 8.7 kg, and myoglobin is about 220 g, or 0.012 mol.  The mass is about 6 times greater than that of glutathione.
  • The myosin heavy chain, a protein that’s a major component of muscle, is about 1/6 of dry muscle mass, or about 1.5 kg; at roughly 220,000 g per mol, that’s about 0.007 mol.  The mass is nearly 40 times that of glutathione.
  • Collagen, a mixed protein, is about 25% of dry muscle mass, or about 2.2 kg.  That’s nearly 60 times more than glutathione. Collagen holds us together, as it does in other mammals; it’s the tough sinews and membranes, familiar to hunters as well as to anyone who cuts up chicken for dinner.
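The comparisons in the bullet list above can be reproduced in a few lines; the ATP molar mass of ~500 g/mol is my assumption, chosen to match the text’s 0.2 mol ≈ 100 g:

```python
# Body-abundance comparison for a 70 kg person with ~50 L of body water.
glutathione_g = 50 * 0.0025 * 307      # 2.5 mM average, 307 g/mol

molecules_g = {
    "cholesterol": 160,    # grams, whole body (brain holds ~1/5 of it)
    "ATP": 0.2 * 500,      # ~0.2 mol at ~500 g/mol -> ~100 g (assumed molar mass)
    "myoglobin": 220,      # ~2.5% of dry muscle mass
}
print(f"glutathione: {glutathione_g:.0f} g")
for name, grams in molecules_g.items():
    print(f"{name}: {grams:.0f} g ({grams / glutathione_g:.1f}x glutathione)")
```

Even these three common molecules each outweigh glutathione severalfold, which is the point of the list.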

Glutathione is important as an antioxidant.  It’s not as abundant as Natural Grocers would have us believe.  It’s also a compound we can and do make naturally in our own bodies, with any normal or even near-normal diet.  Remember, taking hominids as starting 2 million years ago, we survived at least 100,000 generations without supplements in pills.  Sure, our ancestors died at much younger ages than do most of us, but it was very, very rarely from just lacking glutathione; big predators, infectious diseases, and simple broken bones that hampered both escape and foraging were among the causes.  They didn’t die looking for a Natural Grocers store.

Natural Grocers recommends that you eat lots of the brassicaceous vegetables – kale, broccoli, cauliflower, Brussels sprouts, cabbage, kohlrabi.  Problem: these foods contain abundant goitrogens that interfere with iodine uptake by your thyroid gland.  You can get hypothyroidism!  People have been getting hypothyroidism on eating lots of kale.  Check out a reliable book that covers diet and nutrients, such as the eminently readable On Food and Cooking, by Harold McGee.

That March 2018 issue of Natural Grocers’ sales flier also pushed eating vegetables high in nitrate.  What a twist.  Knowledgeable food experts have warned for years about the dangers of high nitrate levels, which can cause methemoglobinemia, a condition in which the iron atom in the center of the hemoglobin molecule in your red blood cells is oxidized to the ferric state, which has low affinity for oxygen.  Eat red beets, turn blue?  Not really, unless you eat a lot of beets.  Note also that high nitrate in vegetables is generally a result of overuse of chemical fertilizers (nitrate itself, or ammonia that gets oxidized to nitrate in the soil), and beets (and lettuce, …) are particularly good at taking up nitrate while not reducing it to biochemically useful ammonia internally.  I’ve done a significant amount of research on the cycling of nitrogen in various forms around the globe, where the ability of different kinds of plants to reduce nitrate to ammonia that is usable by the plant (e.g., to make proteins) is a rich and diverse subject. Sure, nitrate is one source for our bodies to make nitric oxide, the molecule critical for signaling among organs – in really small amounts.  Nitric oxide in large enough amounts is toxic, particularly to infants; it’s a free radical, having an unpaired electron ready to make bonds with, well, almost anything, which is not good.

You can also damage yourself with excess vitamins A, D, and B12, or even die, as a British food faddist did recently by overdosing on vitamin A (extreme liver damage).  Excess vitamin C is harmless; you just excrete the excess (and the dollars it represents).

A little knowledge is a dangerous thing. — Attributed to various commentators

Education: that which discloses to the wise and hides from the foolish how little they know. — attributed in various forms to Mark Twain

Zika not alone in affecting fetal brains

Fearsome flaviviruses: A very short note in a recent issue of Science (Vol. 359: 530, 2 Feb. 2018) cites a study from the journal Science Translational Medicine.  Recall the dramatic effects of the Zika virus, which can infect the developing brain tissue in embryos and fetuses, causing death or, heartbreakingly, brain malformations, notably microcephaly.  Now it appears that West Nile virus (already present here in the US) and Powassan virus may have similar capabilities.  They can grow in tissues taken from the mother or the fetus.

Establishment of a reservoir of these viruses in a geographic area such as ours is often conditioned on having their insect vectors –  especially mosquitoes – sharing the virus between humans and some forest animals – that is, a sylvatic cycle.  I have more information on this, provided by virus researcher Prof. Kathryn Hanley, in a recording of her visit to me in the KTAL LP FM studio here in Las Cruces, NM.  I made it into a YouTube video.

Densified wood – stronger than steel (but…)

Densified wood: In a very recent issue of Nature (Vol. 554: 224-228), authors Jianwei Song and others reported that they were able to make wood into a very dense, very strong and tough material.  They removed some of the lignin polymer and carefully crushed the remainder, mostly cellulose.  The density increased from 0.42 grams per cubic centimeter to about three times that, denser than water.  Basically, they collapsed the open conduits of wood, the xylem vessels that carry water and nutrients upward from the soil.  They were able to layer it in alternating directions of the former grain, like plywood.  Its strength (stress needed to break it) exceeds that of even high-strength steel.  Interestingly, its toughness also increased (this is the energy or work needed to break it); ordinarily, toughness goes down as strength increases (e.g., see my post about spider silk).  They had an interesting demo of toughness with a projectile shot into it.

It’s premature to say this will replace steel as a structural material in many applications.  For one, densified wood swells alarmingly at high humidity, by 8.4% after 128 hours at 95% relative humidity.  It’s not dimensionally stable, then.  One topic I didn’t see addressed is its anisotropy – its properties vary with the direction of applied stress.  Even layered in alternating directions like plywood, in the third dimension, parallel to the original grains, I’d expect it to be easier to disrupt – to delaminate, as it were.

Densified wood also is not more resilient than steel.  After hitting the highest stress that it can tolerate, it breaks down bit by bit at higher strains (relative extension).

Stay tuned.

 

Our brains got a lot of mutations while we were in utero

Mutations in our brains as they develop in our time as fetuses: In a very recent issue of Science (Vol. 359: 550-555; 2 Feb. 2018), authors Taejeong Bae and others reported that we accumulate a lot of genetic mutations in our individual brain cells as we develop.  They found different mutations in each cell – 200 to 400 of them, on average, accumulating with the age of the fetus. They looked for very basic types of mutations, changes of one DNA base to a different one, termed single-nucleotide variants.  (That is, they did not look at mutations that deleted or inserted stretches of DNA.) Nearly half of the mutations occurred in parts of our DNA related to brain function (vs. other organs, though the functions overlapped). Clearly, we still function with these changes in all our neurons.  Granted, many variations in DNA bases don’t affect a protein that the cells make (the genetic code is redundant – several different sets of three bases specify the same amino acid), or, for stretches of DNA that don’t make proteins but interact with genes to regulate their degree of expression, they may not change that regulatory function much.

Still, these mutations are quite abundant – 50 times more per cell than in our adult cells of the liver, colon, and intestine, and almost 1000 times more than in our germ cells (eggs and sperm).  Of course, the latter resistance to mutation is a good thing.  While mutations that don’t disable us or kill us are the source of our evolution of function, including our oversized brains themselves, too many mutations reduce our biological fitness.

The mutations in our young, developing brains resemble those in cancers.  The authors take this to indicate that these “normal” (my word) mutations are part of the background for cancer.  They attribute the high rate of mutation to oxidative stress and to a high rate of cell division (faster is sloppier in copying DNA, then) during the stages called pregastrulation and neurogenesis.

The variations between cells that must occur remind me of quips about the UNIX and Linux operating systems – everyone comes with a different version and claims they’re all equivalent.  I wonder how non-equivalences among neurons affect how we think.

The variations also remind me of the wonder that our extremely complex bodies with so many controls to go awry (hormones, nerves, enzyme complements, basic development) almost always function well or even very well.  I have to skip over the 2/3 or so of conceptions that lead to death of the embryo – some errors are just too big.  We’re the lucky ones.

Subjective time: does time seem faster as we get older?

Here’s a simple (simplistic?) argument that we experience time in logarithmic fashion.

Intro: When we were young, it seemed to take ages to get older – to the next grade in school, to the next stop on a long drive, to wait till Christmas or another holiday.  There are so many cliches about the change in the experience of time as we get older.
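As a preview of the linked math, here is a minimal sketch of the logarithmic model: if each year feels shorter in proportion to the years already lived, subjective time accumulates as ln(age).  The starting age of 1 year and lifespan of 80 years are assumptions for illustration, not the write-up’s exact choices:

```python
import math

def subjective_fraction(age, start=1.0, end=80.0):
    """Fraction of a lifetime's subjective duration elapsed by a given age,
    assuming perceived time accumulates as ln(age) from `start` to `end`."""
    return (math.log(age) - math.log(start)) / (math.log(end) - math.log(start))

# Under this model, half of subjective life is over surprisingly early.
for age in (9, 20, 40):
    print(f"age {age}: {100 * subjective_fraction(age):.0f}% of subjective life")
```

By this reckoning, a 9-year-old has already experienced about half of a lifetime’s subjective duration – one way to see why the years seem to fly by faster as we age.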

The math, with a few graphs: follow this link