test MT to image

Converting MathType equations to images:

Kutools converted very few equations to images!

Saving a docx as html had the same problem.

Then I simply copied any MT equation and pasted it into a docx like this one using the 4th paste choice, as an image!

The choice icon looks like a mountain scene!

Now check that this conversion to image does work: put this document onto a webpage with the Mammoth .docx converter.

  1. (in free space). Then

 

 

This is the classic one-over-r-squared law for the falloff of power with distance. With R = radius of the Sun (0.696 million km) and R' = mean radius of the Earth's orbit (150 million km),

If we want a planet with an energy flux density that’s the same as for Earth (so that it has about the same temperature), we want the total power of the star spread out at the planet’s orbital distance to be like that for the Earth:
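The condition above can be written out explicitly (a sketch of the reasoning; the symbols are mine: L is the star's total radiated power and d the orbital distance):

```latex
% Inverse-square falloff: flux density at distance d from a star of total power L
F(d) = \frac{L}{4\pi d^{2}}
% Same flux density at the planet as Earth receives from the Sun:
\frac{L_\star}{4\pi d_\star^{2}} = \frac{L_\odot}{4\pi d_E^{2}}
\quad\Longrightarrow\quad
d_\star = d_E \sqrt{L_\star / L_\odot}
```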

 

Test hi-res2

Sidebar. The Hertzsprung-Russell diagram of star temperature and luminosity

Stars vary dramatically in color and brightness:

Hubble Space Telescope image of the Sagittarius Star Cloud. The image shows many stars of various colors, white, blue, red and yellow spread over a black background. The most common star colors in this image are red and yellow.

Sagittarius Star Cloud. Credit: Hubble Heritage Team (AURA/STScI/NASA)

Untold numbers of observations of stars show distinctive regularities in their attributes. Many stars cluster along a line called the Main Sequence when their luminosity (to be defined shortly) is plotted against their temperature or associated color (more light in the blue end of the spectrum than in the red equates to hotter).

In the early 1900s two astronomers independently developed an eponymous plot that shows this: Ejnar Hertzsprung in Denmark and Henry Norris Russell in the US:

R. Hollow, Commonwealth Scientific and Industrial Research Organization


A bit about the definition of luminosity used in the plot: Even before the era of CCD cameras, astronomers quantified the brightness of stars as they made their observations. At our point of observation, brightness is the flux of photons per area of whatever we use to catch the radiation – our eyes, a photographic plate, a CCD camera recording. This apparent luminosity can be converted to an absolute luminosity, accounting for stars being at various distances from us (see below). The absolute luminosity can be cited in two ways:

  • A magnitude, with the star Vega as the starting point of magnitude 0. Every increase of one magnitude is a decrease in luminosity by a factor of 2.512. This is a logarithmic scale, base 2.512. A difference of 5 magnitudes is a difference of a factor of 100. (Why this scale? Ask astronomer Norman Robert Pogson, or maybe not, since he died in 1891.) To keep in mind that higher numerical magnitude corresponds to lower luminosity, think of it as a ranking – 7th is lower than 2nd, as among tennis pros. Note that magnitudes need not be integers. They can be 2.3, 4.7, …;
  • A value relative to the luminosity of the Sun. The Sun is a wimpy magnitude-4.83 star. Sirius has magnitude 1.42. That's 3.41 magnitudes brighter (remember, lower numbers mean brighter), a factor of 23 in total output.
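The magnitude arithmetic above is quick to check (a sketch; the 2.512 step is just 100 to the 1/5 power):

```python
# Magnitude scale: a difference of 5 magnitudes = a factor of 100 in luminosity,
# so one magnitude is a factor of 100**(1/5), about 2.512.
def luminosity_ratio(mag_faint, mag_bright):
    """Factor by which the brighter star outshines the fainter one."""
    return 100 ** ((mag_faint - mag_bright) / 5)

# Sun (absolute magnitude 4.83) vs. Sirius (1.42): 3.41 magnitudes apart
print(round(luminosity_ratio(4.83, 1.42)))  # → 23
```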

The physical origin of the tight pattern along the Main Sequence became clear as:

  • The process of nuclear fusion was discovered and characterized. These stars are in their early lives and are fusing hydrogen to helium as a main process. They’re in a common mode;
  • The variation of luminosity with simple distance from us could be corrected. A hot distant star might look less luminous than a cool nearby star. If we can measure the distance, r, we can compare stars as if they are all at a common distance, r0 (astronomers use 10 parsecs, or 32.6 light-years). We may then multiply the apparent luminosity, a raw measure, by the factor (r/r0)2. This yields the defined absolute luminosity.
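The distance correction in the last bullet, as a minimal sketch (r0 = 10 parsecs):

```python
R0_PARSEC = 10.0  # astronomers' reference distance: 10 parsecs (≈ 32.6 light-years)

def absolute_luminosity(apparent_luminosity, r_parsec):
    """Rescale a raw (apparent) luminosity to the common distance r0."""
    return apparent_luminosity * (r_parsec / R0_PARSEC) ** 2

# A star at 100 parsecs looks 100x dimmer than the same star would at 10 parsecs,
# so its apparent luminosity gets multiplied by (100/10)**2 = 100.
print(absolute_luminosity(1.0, 100.0))  # → 100.0
```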

The physics, in brief: Going up and to the left we have stars that are hotter (therefore, bluer) and brighter, in a clear relation.

  • These stars have higher mass. As noted in the main text, they fuse hydrogen faster. They are hotter.
  • Stars largely radiate as blackbodies.
  • Blackbodies have peak emission at a wavelength that is inversely proportional to the temperature. For Sirius at a temperature of 9,940K, the peak is at 292 nm, in the “blue” band (really, the ultraviolet). For the Sun at 5800K, the peak is at 500 nm, in the yellow band. For our close relation, Proxima Centauri at 3042K, it is at 953 nm, in the red band (actually, the near infrared).
  • Blackbodies have total radiant energy output in proportion to absolute temperature to the fourth power, T4. Given the dynamics of hydrogen fusion, T rises roughly as mass to the 0.6 power; T4 then rises about as m2.4.
  • A second contribution to luminosity is the area of the star’s surface. It rises in approximate proportion to mass to the 1.2 power.
  • Thus, total radiated power – and resultant luminosity – rises nearly as m3.6. This omits the “clipping” of recorded radiation when it gets too short or too long in wavelength to be recorded in the detector.
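The blackbody numbers and the mass scalings in these bullets can be reproduced in a few lines (a sketch, using Wien's displacement constant b ≈ 2.898×10⁶ nm·K):

```python
WIEN_B_NM_K = 2.898e6  # Wien's displacement constant, in nm*K

def peak_wavelength_nm(T_kelvin):
    """Wavelength of peak blackbody emission (Wien's displacement law)."""
    return WIEN_B_NM_K / T_kelvin

for name, T in [("Sirius", 9940), ("Sun", 5800), ("Proxima Centauri", 3042)]:
    print(name, round(peak_wavelength_nm(T)), "nm")
# Sirius 292 nm, Sun 500 nm, Proxima Centauri 953 nm

# Luminosity scaling with mass, per the bullets:
# T ~ m**0.6, so T**4 ~ m**2.4; surface area ~ m**1.2; total ~ m**3.6
def luminosity_scaling(mass_ratio):
    return mass_ratio ** (2.4 + 1.2)

print(round(luminosity_scaling(2), 1))  # → 12.1, for a star of 2 solar masses
```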

All told, then, mass determines temperature and luminosity in these stars, in a tight relation.

What about the stars toward the top and right? The Main Sequence is a sequence in mass, not in time; the Sun will not move to higher or lower mass while burning hydrogen, outside of a fraction of a percent from mass-to-energy conversion. Still, stars in later life can move off the Main Sequence. Stars of 10 times the mass of the Sun or more start fusing helium, inflating and getting cooler but very much more luminous. An example is monstrous Betelgeuse. Such stars fuse to a core of iron, the most stable nuclide. They then explode as type II supernovae. Betelgeuse is ripe to do so, in perhaps as few as a thousand years by some estimates. Stars not quite as massive can blow off their outer layers to leave a hot, very dense, but low-luminosity white dwarf. Some massive stars leave enough mass intact to become those enigmatic neutron stars or even small black holes (the really big black holes are huge accumulations of many stellar masses in the centers of galaxies). Some neutron stars, the magnetars, have mind-boggling magnetic fields that contribute to emission of intensely powerful beams of X-rays and gamma rays. All these special stars came into our ken long after Hertzsprung and Russell made their diagram. There's always something new under the Sun, as it were.

There are many more details in the paths by which stars evolve. There are many online and printed sources to follow this topic.

 

Test Word pix


 

Spinlaunch is demonstrably impossible

An investment in an impractical technology

Summary of the impracticality of Spinlaunch

The New Mexico Spaceport (https://www.spaceportamerica.com/), funded by taxpayers, started with Virgin Galactic's space tourism entity as its anchor tenant. It has gained other tenants, though it is not yet economically sustainable; space tourism may start in late 2019.

One new tenant is Spinlaunch, a company from Sunnyvale, California (http://www.spinlaunch.com/). They've raised $40M from investors (https://www.bloomberg.com/news/articles/2018-06-14/this-startup-got-40-million-to-build-a-space-catapult) including Google Ventures (now GV) and Airbus Ventures for a speculative technology, which I describe shortly below. They propose to use a large spinning platform to launch satellites from the ground (a rocket must then complete the boost to orbit).

The idea sounded preposterous to me, so I worked out the limitations, which I claim are solidly against this being practical or even possible. Join me now:

The basic technology proposed is:

  • A vacuum chamber with a radius of about 50 m (Bill Gutman from Spaceport America let out the knowledge that my first estimate of 500 m was “an order of magnitude too high;” pushing on a completely unrealistic guess helped spring this information loose).
    • Bill says it's patented, but there's only a patent application dated July 2018, US 2018/0194496 A1, to Jonathan Yaney.
    • I note that a patent says nothing about the practicality of an “invention.” Patent examiners are not allowed to decide on issuing a patent based on practicality. I note that Henry Latimer Simmons obtained patent 536,360 for a ludicrous invention to let one train pass over the top of another on one track.
  • Placing the satellite with its rocket motor on the periphery and spinning up to a tangential speed of Mach 4-5, as Bill cites. It could not be spun up to LEO (Low Earth Orbit) speed; of course, the satellite would burn up on launch here in the lower atmosphere.
  • Upon launch a rocket engine ignites to reach the speed for attaining LEO.

Calculations:

  • I’ll take the lower speed, Mach 4, about 1,320 m s-1, to give the least stressful conditions.
  • At a radius r = 50 m and a speed v = 1320 m s-1, the centrifugal acceleration is very simply calculated as v2/r = 34,850 m s-2. That’s very closely 3,500 g! We’re talking about a satellite and its rocket engine withstanding this, including its electronics.
    • Bill Gutman says that there are already military projectiles that get accelerated to 40,000 to 50,000 g – to get a muzzle velocity of 1,000 m s-1 in a 10-m barrel. The electronics are potted to withstand the acceleration (https://www.raytheon.com/capabilities/products/excalibur).
    • Fine, but:
    • (1) A satellite has to have folded solar panels and antennae. These cannot be potted, and I cannot imagine any folding and cushioning that doesn’t destroy the joints or the panels. The military projectiles only have to deploy small vanes to steer. (I also don’t know how their performance meets specs.)
    • (2) To reach vLEO (calculations below), there has to be a rocket engine. It will have to be a solid propellant engine; the complex plumbing and pumps of a liquid-fueled engine could not possibly survive 3500 g.
    • This engine should really be two-stage. An effective vLEO of over 8,000 m s-1 is needed, with a bit of thrust vectoring to go from horizontal to tangential in the trajectory, as well as to overcome drag in the initial part of the trajectory. I get an estimate closer to 9,200 m s-1, not achievable with a single solid-propellant stage; see below. The exact calculation of air drag would be similar to the math for interceptors such as Nike or the more modern (and low-effectiveness) GMD. So, the additional speed needed is well over 6,700 m s-1.
    • The classic rocket equation expresses the gain in speed (yes, let’s say speed, since direction is not specified and does change) as Δv = vex ln(m0/mf), where vex is the exhaust velocity as determined by the propellant type and m0 and mf are the initial and final masses of the rocket. I’ve written this up, too (https://science-technology-society.com/wp-content/uploads/2018/01/rocket_equation_in_free_space.pdf). We assume the loss of mass is that of propellant. Taking the final hull and payload (satellite) as having a mass of only 10% of the initial mass (90% burn), we get the logarithmic factor as ln(10) = 2.3.
    • Solid propellants have only a moderate vex, hitting about 2,500 m s-1. We get Δv=5,750 m s-1. Yes, I’d say that a second stage is necessary.
    • (3) Can a solid-propellant rocket withstand the lateral acceleration? Of course, the rocket has to point up, so the rocket and payload are aligned perpendicularly to the radius. There is an enormous bending force exerted on the rocket body. The force also gets relieved almost instantaneously on launch, generating a change in acceleration called, appropriately, jerk. This sets parts of the launched item into sharp motion – like your innards if you’re in a high-speed traffic accident.
    • (4) How big a satellite can be launched, given materials limitations? There are some small satellites, e.g., the CubeSats, but they have economical and reliable launches already on standard rockets. For more practical sizes, I’m not about to do the engineering calculations to estimate the stresses on the launch platform and the safety factor. This assumes that the payload and its own rocket survive, which I flatly reject, as above. I note that:
    • (5) The whole idea was to save energy and cost in launching satellites. There’s a lot of energy put into the launch mechanism, far more than the kinetic energy imparted to the (putative) rocket + payload. Maybe some could be recovered in electromagnetic braking…needing a significant amount of electrical storage and circuits to handle massive currents.
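The two central numbers in these bullets – the centrifugal load and the solid-rocket Δv – are quick to verify (a sketch using the figures cited above):

```python
import math

# Centrifugal acceleration at the rim: a = v**2 / r
v = 1320.0   # m/s, roughly Mach 4
r = 50.0     # m, chamber radius per Spaceport America
a = v ** 2 / r
print(round(a))          # → 34848 m/s^2
print(round(a / 9.81))   # → 3552, i.e. about 3,500 g

# Rocket equation: dv = v_ex * ln(m0/mf), with a 90% propellant fraction
v_ex = 2500.0                # m/s, typical for solid propellant
dv = v_ex * math.log(10)     # m0/mf = 10
print(round(dv))             # → 5756 m/s, short of the 6,700+ m/s still needed
```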

There are other niceties:

  • Consider the extremely active timing needed to release the rocket + payload. Suppose we want the launch direction error not to exceed 1°, or 1/57 of a radian. To reach a tangential speed of 1320 m s-1 at a radius r = 50 m, one needs a rotation rate of 1320/50 = 26.4 radians s-1. That’s a bit over 1,500 degrees per second. The window is less than 1 ms wide.
  • Safety: What’s the shield in case the launch mechanism fails, sending out shards at high speed? How about releasing the rocket + payload nearly horizontally by accident? You need a BFS, a big functional shield.
  • How about the reaction of the spinning platform when the rocket + payload is released? That’s quite a jolt on the suspension. Maybe some engineers can address that, but not the fundamental no-gos (a neologism?) I’ve noted all through.
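The release-timing window in the first bullet, worked out (a sketch):

```python
import math

v = 1320.0       # m/s, tangential speed
r = 50.0         # m, radius
omega = v / r    # 26.4 rad/s
deg_per_s = math.degrees(omega)
print(round(deg_per_s))           # → 1513 degrees per second

# Time window to keep the launch direction within 1 degree:
window_s = 1.0 / deg_per_s
print(round(window_s * 1000, 2))  # → 0.66 ms, well under a millisecond
```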

Conclusion:

  • This is a pipe dream or a scam, poorly thought out at the very best.
  • Yes, Google Ventures, now known as spinoff GV, and Airbus Ventures are among the investors in this. I can only attribute their lack of due diligence to their lack of sufficient technical expertise – I think Google and Airbus, both technically solid, spun off the MBAs and not the engineers into their Ventures.

The rest of the analysis uses equations set by MathType in Microsoft Word.  These don’t come through in the Mammoth docx converter plug-in to WordPress, so I link here to a PDF version of the rest of the analysis.

 

Adventures in light propagation – teaching and research

This multilayered post can be followed from one PDF document, or by the explicit links given below in the description of the whole study.  The post covers:

  • Teaching:
    • Working with students to make a light intensity detector using a photodiode.  It measures photon flux density in the visible portion of the electromagnetic spectrum.
    • We went on to use it as the detector in a (spectro)photometer for measuring the concentration of methylene blue dye illuminated with light from a high-intensity yellow LED.
  • Research:
    • The main point I just completed writing up is the use of radiative transport equations that I developed for estimating scattered light within a uniform canopy of plants.  The solutions for the fluxes of a direct beam and diffuse light together are analytic, in terms of algebraic and exponential quantities.
    • The model also is useful for simulating the propagation of light inside leaves for modeling photosynthetic rates of leaves with different structures and pigmentations.  I have a number of publications on this (which I can link later, when I find PDFs of them).  One interesting prediction I made is that leaves with half-normal chlorophyll content should allow sharing of light with leaves deeper in a dense canopy, ultimately giving an 8% increase in biomass and yield.  John Hesketh’s group at the University of Illinois tested this in the field and got an 8% increase over fully green leaves! (Pettigrew WT, Hesketh JD, Peters DB, et al. (1989) Characterization of canopy photosynthesis of chlorophyll-deficient soybean isolines. Crop Science 29:1025-1029).
    • Recently (Nov.-Dec 2017) I extended it to multiple layers of different optical properties.  The challenge was testing it rigorously and making a comprehensive explanation with text and equations.
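The (spectro)photometer measurement described above rests on the Beer-Lambert law. A minimal sketch of the concentration calculation, with an illustrative extinction coefficient (the value below is hypothetical; the real one must be calibrated for methylene blue at the LED's wavelength):

```python
import math

def concentration(I_transmitted, I_incident, epsilon, path_cm):
    """Beer-Lambert law: A = -log10(I/I0) = epsilon * c * l, solved for c."""
    A = -math.log10(I_transmitted / I_incident)
    return A / (epsilon * path_cm)

EPSILON = 1.0e4   # hypothetical placeholder (M^-1 cm^-1), NOT a measured value
c = concentration(I_transmitted=10.0, I_incident=100.0,
                  epsilon=EPSILON, path_cm=1.0)
print(c)  # absorbance A = 1, so c = 1/(1e4 * 1) = 1e-4 M
```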

Back toward teaching: I wanted to verify that the photodiode circuit responded linearly to flux density.  I made a simple error in placing layered scattering media too close to the detector, invalidating Beer’s law for the direct beam alone.  However, I then dove into the propagation of direct and diffuse light for its inherent interest.
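For orientation, here is a generic two-flux (direct beam plus diffuse) form of such radiative transport equations – a sketch of the standard structure only, not necessarily the exact equations in my write-up. Here z is depth into the medium (increasing downward), I_b the direct beam, F↓ and F↑ the downward and upward diffuse fluxes, k absorption and s scattering coefficients:

```latex
% Direct beam attenuates exponentially (Beer's law):
\frac{dI_b}{dz} = -k_b\, I_b
% Diffuse fluxes are coupled to each other and fed by beam scattering:
\frac{dF^{\downarrow}}{dz} = -(k+s)\,F^{\downarrow} + s\,F^{\uparrow} + s_b\, I_b
-\frac{dF^{\uparrow}}{dz} = -(k+s)\,F^{\uparrow} + s\,F^{\downarrow} + s_b'\, I_b
```

The analytic solutions mentioned above come from this linear, constant-coefficient structure: exponentials in z with algebraic coefficients.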

The lead PDF document here has several sections:

  • The most recent inquiry: is the photodiode responding linearly to photon flux density?
  • The radiative transport equations
  • A few notes about extensions to nonuniform canopies

Within the lead document are links to several others.  The links are embedded within the lead document; the links are also noted directly here:

  • A short write-up of the electronic circuit for the photodiode detector
  • “Fixing-approximations.pdf”: A set of notes on improving a whole-canopy flux model (light, CO2 uptake and respiration, transpiration) in the representation of:
    • The enzymatic model of photosynthetic carbon fixation, bridging the cases of high light and low light
    • The equations for radiative transport, with their full derivation and some numerical results
    • A discussion of extensions of the model for leaves varying in absorptivity with depth in the canopy, or that are clumped, or that vary in temperature as they transpire water at different rates
    • In turn, this PDF references a short Fortran 90 program I wrote to solve the radiative transport equations
    • Also, a link to a 2013 publication I had with Zhuping Sheng, modeling all the fluxes of a pecan orchard.  The relevance is that I cited in this second PDF the modeling of light in a regular array of tilted, ellipsoidal canopies of individual trees.  Sheng did the experimental measurement of fluxes with eddy covariance equipment. I modeled the results, with one surprising finding that pecan trees, unlike every other plant I’ve studied, do not reduce their stomatal conductance and thus their transpiration in very dry conditions.  They operate at high transpiration rates and poor water-use efficiency in these conditions.
    • A couple of references to publications:
      • A model of radiative transport in layered plant canopies represented by finite layers (a finite-element model), as an integral equation that’s readily solved numerically.   I cite this publication because within it I discuss the changing angular distribution of diffuse light with depth.
      • The clever method of colleague and friend Michael Chelle and his former advisor Bruno Andrieu for radiative transport in an arbitrary assemblage of light-scatters.  The method is called nested radiosity, accounting essentially exactly for nearby scatterers affecting light at a given leaf and then via a nice smoothing approximation (mean field) for more distant scatterers.

 

Smart Water?

Posted 11 December 2017.

Smart water: ads

As our son David reported reading: “If you’re paying $4 a bottle for smart water, it’s not working!”

Start with the cost.  Tap water averages about $2 per 1,000 gallons, which is enough to fill

Can it be any better than tap water?   A tiny bit, perhaps.   Regular tap water in almost all US water supplies is actually cleaner than most bottled water, according to independent labs.  Save money, save the landfill waste, save the petroleum used to make the plastic bottles!
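The cost comparison is easy to make concrete (a sketch; the one-liter bottle size is my assumption):

```python
LITERS_PER_GALLON = 3.785

tap_cost_per_liter = 2.0 / (1000 * LITERS_PER_GALLON)  # ~$0.00053 per liter
bottle_cost_per_liter = 4.0                            # assuming a 1 L bottle

ratio = bottle_cost_per_liter / tap_cost_per_liter
print(round(ratio))  # bottled "smart" water costs thousands of times more than tap
```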

(Eau du robinet!)

What could be improved in Smart Water, and how?

Distillation – removes dissolved solids…and SOME of the volatile organic compounds, SOME

Adding in minerals, selectively – why not drink the natural minerals in your tap water?

The makers avoid sodium – fine, in our salt-laden cuisine…but we get so little sodium from our water!

No fluoride – but you need small amounts of fluoride (GO ON about fluorosis worry)

No heavy metals, arsenic, etc. – well, they’re in your food, unavoidably.  We have lived with U, Hg, etc. for the whole ~2 million years of our genus

No gluten – ridiculous!  Gluten only comes from wheat and barley, and I haven’t noticed public works people tossing either into our water supplies!

What about water purity, in history?

Not a good record, until sanitation started big time in the mid-1800s

Reason to drink wine, beer (maybe! adulterated), strong spirits – why the temperance movement had a basis (along with the transport problem for grain from the US Midwest, e.g.)

Cholera spreads by contaminated water – English well XXXXX

In fact, broadening to sanitation, in general – it was the first major advance in human health!

For our water and food, and then in medicine – the sad story of Joseph Lister XXXX

Go out and thank an LC utility worker, a garbageman!

On, but if you’re a vegan, thank a little dirt in your food, for vitamin B12 …..