Friday, September 23, 2016

Is cloning of love possible?

In a Facebook discussion with Bruno Marchal and Stephen King the notion of quantum cloning as copying of a quantum state popped up, and I ended up asking about approximate cloning and got a nice link, about which more below. From Wikipedia one learns some interesting facts about cloning. The no-cloning theorem states that the cloning of all states by a unitary time evolution of the tensor product system is not possible. It is however possible to clone an orthogonal basis of states.

As a response to my question I got a link to an article by Lamourex et al showing that cloning of entanglement - to be distinguished from the cloning of a quantum state - is not possible in the general case. Separability - the absence of entanglement - is not preserved. Approximate cloning necessarily generates some entanglement in this case, and the authors give a lower bound for the residual entanglement in the case of an initially unentangled state pair.

The impossibility of cloning entanglement in the general case makes it impossible to transfer information carried by an arbitrary kind of entanglement. Maximal entanglement - and maybe even negentropic entanglement (NE), which is maximal in the p-adic sectors - could however make possible communication without damaging the information at the source. Since conscious information is in adelic physics associated with the p-adic sectors responsible for cognition, one could even allow the modification of the entanglement probabilities and thus of the real entanglement entropy in the communication process, since the maximal p-adic negentropy depends only weakly on the entanglement probabilities.

NE is assigned with conscious experiences with positive emotional coloring: the experience of understanding, the experience of love, etc... There is an old Finnish saying, which can be translated as "Shared joy is double joy!". Could the cloning of NE make possible the generation of entanglement by a loving attitude, so that living entities would not be mere thieves trying to steal NE by killing and eating each other?

For background see the chapter Negentropy Maximization Principle. See also the article Is the sum of p-adic negentropies equal to real entropy?.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Thursday, September 22, 2016

What happens to the extremals of Kähler action when volume term is introduced?

  1. The known non-vacuum extremals such as massless extremals (topological light rays) and cosmic strings are minimal surfaces, so that they remain extremals and only the classical Noether charges receive an additional volume term. In particular, the string tension is modified by the volume term. Homologically non-trivial cosmic strings are of the form X2× Y2, where X2⊂ M4 is a minimal surface and Y2⊂ CP2 is a complex 2-surface and therefore also a minimal surface.

  2. Vacuum degeneracy is in general lifted and only those vacuum extremals, which are minimal surfaces survive as extremals.

For CP2 type vacuum extremals the roles of M4 and CP2 are interchanged. The M4 projection is a light-like curve and can be expressed as mk=fk(s), with the light-likeness conditions reducing to Virasoro conditions. These surfaces are isometric to CP2 and have the same Kähler and symplectic structures as CP2 itself. What is new as compared to GRT is that the induced metric has Euclidian signature. The interpretation is as lines of generalized scattering diagrams. The addition of the volume term forces the random light-like curve to be a light-like geodesic, and the action becomes the volume of CP2 in the normalization provided by the cosmological constant. What looks strange is that the volume of any CP2 type vacuum extremal equals the CP2 volume, but only the extremal with a light-like geodesic as M4 projection is an extremal of the volume term.

Consider next vacuum extremals, which have a vanishing induced Kähler form and thus have a CP2 projection contained in an at most 2-D Lagrangian manifold of CP2.

  1. Vacuum extremals with 2-D projections to CP2 and M4 are possible and are of the form X2× Y2, with X2 an arbitrary 2-surface and Y2 a Lagrangian manifold. The volume term forces X2 to be a minimal surface and Y2 to be a Lagrangian minimal surface, unless the minimal surface property destroys the Lagrangian character.

    If the Lagrangian sub-manifold is a homologically trivial geodesic sphere, one obtains string like objects with a string tension determined by the cosmological constant alone.

    Do more general 2-D Lagrangian minimal surfaces than the geodesic sphere exist? For a general Kähler manifold there are obstructions, but for Kähler-Einstein manifolds such as CP2 these obstructions vanish (see this). The case of CP2 is also discussed in the slides "On Lagrangian minimal surfaces on the complex projective plane" (see this). The discussion is very technical and demonstrates that Lagrangian minimal surfaces of all genera exist. In some cases these surfaces can also be lifted to the twistor space of CP2.

  2. More general vacuum extremals have 4-D M4 projection. Could the minimal surface condition for the 4-D M4 projection force a deformation spoiling the Lagrangian property? The physically motivated expectation is that string like objects give rise, as deformations, to magnetic flux tubes for which the string is thickened to have a 2-D cross section. This would suggest that the deformations of string like objects X2× Y2, where Y2 is a Lagrangian minimal surface, give rise to homologically trivial magnetic flux tubes. In this case the Kähler magnetic field would vanish, but the spinor connection of CP2 would give rise to an induced magnetic field reducing to some U(1) subgroup of U(2). In particular, an electromagnetic magnetic field could be present.

  3. p-Adically Λ behaves like 1/p, as does the string tension. Could the hadronic string tension be understood also in terms of the cosmological constant in the hadronic p-adic length scale, if one assumes that the cosmological constant for a given space-time sheet is determined by its p-adic length scale?

The so-called Maxwell phase, which would correspond to small perturbations of M4, is also possible for the 4-D Kähler action. For the twistor lift the volume term makes this phase possible. The Maxwell phase is highly interesting since it corresponds to the intuitive view about what the QFT limit of TGD could be.
  1. The field equations are a generalization of massless field equations for fields identifiable as CP2 coordinates, with a coupling to the deviation of the induced metric from the M4 metric. The deviation represents a very weak perturbation, hence the linearized field equations are expected to be an excellent approximation. The general challenge would however be the construction of exact solutions. One should also understand the conditions defining preferred extremals and stating that most of the symplectic Noether charges vanish at the ends of the space-time surface at the boundaries of the causal diamond (CD).

  2. The Maxwell phase is the TGD analog of the perturbative phase of gauge theories. The smallness of the cosmological constant in cosmic length scales would make the perturbative approach useless in the path integral formulation. In the TGD approach the path integral is replaced by a functional integral involving also a phase, but also now the small value of the cosmological constant is a problem in long length scales. As proposed, the hierarchy of Planck constants would provide the solution to the problem.

  3. The value of the cosmological constant, behaving like Λ ∝ 1/p as a function of the p-adic prime, could in short p-adic length scales be large enough to allow a converging perturbative expansion in the Maxwellian phase. This would conform with the idea that Planck constant has its ordinary value in short p-adic length scales.

  4. Does the Maxwell phase allow extremals for which the CP2 projection is a 2-D Lagrangian manifold - say a perturbation of a minimal Lagrangian manifold? This perturbation could also be seen as an alternative view of a thickened minimal Lagrangian string allowing also M4 coordinates as local coordinates. If the projection is a homologically trivial geodesic sphere, this is the case. Note that solutions representable as maps M4→ CP2 are possible also for a homologically non-trivial geodesic sphere and then involve also the induced Kähler form.

  5. The simplest deformations of canonically imbedded M4 are of the form Φ = k·m, where Φ is an angle coordinate of the geodesic sphere. The induced metric in M4 coordinates reads as

    g_kl = m_kl - R² k_k k_l

    and is flat: in suitably scaled space-time coordinates it reduces to the Minkowski metric or its Euclidian counterpart. k_k is proportional to the classical four-momentum assignable to the dark energy. The four-momentum is given by

    P^k = A × hbar k^k ,

    A = [Vol(X³)/L_Λ⁴] × (1+2x)/(1+x) ,

    x = R²k² .

    Here k_k is dimensionless since the coordinates m^k are regarded as dimensionless.

  6. There are interesting questions related to the singularities forced by the compactness of CP2. Eguchi-Hanson coordinates (r,θ,Φ,Ψ) (see this) allow one to get a grasp of what could happen.

    For the cyclic coordinates Ψ and Φ periodicity conditions allow one to get rid of singularities. One can however have n-fold coverings of M4 also now.

    (r,θ) correspond to canonical-momentum type coordinates. Both of them correspond to angle variables (r/(1+r²)^(1/2) is essentially a sine function). It is convenient to express the solution in terms of trigonometric functions of these angle variables. The value of a trigonometric function can go out of its range [-1,1] at a certain 3-surface, so that the solution ceases to be well-defined. The intersections of these surfaces for r and θ are 2-D surfaces. Many-sheeted space-time suggests a possible manner to circumvent the problem: glue two solutions together along the 3-D surfaces at which the singularities for either variable appear. These surfaces could also correspond to the ends of the space-time surface at the boundaries of CD or to the light-like orbits of the partonic 2-surfaces.

    Could string world sheets and partonic 2-surfaces correspond to the singular 2-surfaces at which both angle variables go out of their allowed ranges? If so, 2-D singularities would code for data as assumed in the strong form of holography (SH). SH brings strongly to mind analytic functions, for which singularities also code for the data. Quaternionic analyticity, if it makes sense, would indeed suggest that co-dimension 2 singularities code for the functions in the absence of a 3-D counterpart of cuts (light-like 3-surfaces?).

  7. A more general picture might look like the following. Basic objects come in two classes: surfaces X2× Y2, for which Y2 is either a homologically non-trivial complex minimal 2-surface of CP2 or a Lagrangian minimal surface. The perturbations of these two surface types would also produce preferred extremals, which look locally like perturbations of M4. Quaternionic analyticity might be shared by both solution types. Singularities force many-sheetedness and the strong form of holography.

The cosmological constant is expected to obey p-adic evolution, and in the very early cosmology the volume term becomes large. What are the implications for the vacuum extremals representing Robertson-Walker metrics having an arbitrary 1-D CP2 projection?
  1. The TGD inspired cosmology involves a primordial phase during which a gas of cosmic strings with 2-D M4 projection dominates in M4. The value of the cosmological constant at that period could be fixed from the condition that homologically trivial and non-trivial cosmic strings have the same value of string tension. After this period follows the analog of the inflationary period, when cosmic strings condense at the emerging 4-D space-time surfaces with 4-D M4 projection and the M4 projections of the cosmic strings are thickened. A fractal structure with cosmic strings topologically condensed at thicker cosmic strings suggests itself.

  2. GRT cosmology is obtained as an approximation of the many-sheeted cosmology as the sheets of the many-sheeted space-time are replaced with a region of M4, whose metric is the Minkowski metric plus the sum of the deviations of the sheets from the Minkowski metric. The vacuum extremals with 4-D M4 projection and arbitrary 1-D CP2 projection could serve as an approximation for this GRT cosmology. Note however that this representability is not required by basic principles.

  3. For cosmological solutions with 1-D CP2 projection the minimal surface property forces the CP2 projection to belong to a geodesic circle S1. Denote the angle coordinate of S1 by Φ and its radius by R. For the future directed light-cone M4+ use the Robertson-Walker coordinates (a=(m0²-rM²)^(1/2), r=rM/a, θ, φ), where (m0, rM, θ, φ) are spherical Minkowski coordinates. The metric of M4+ is that of empty cosmology and is given by ds² = da² - a²dΩ², where dΩ² denotes the line element of the hyperbolic 3-space identifiable as the surface a=constant.

    One can write the ansatz as a map from M4+ to S1 given by Φ= f(a). One has gaa=1 → gaa= 1-R²(df/da)². The field equations are minimal surface equations, and the only non-trivial equation is associated with Φ and reads d²f/da²=0, giving Φ= ω a, where ω is analogous to an angular velocity. The metric corresponds to a cosmology for which the mass density goes as 1/a² and the gravitational mass of a comoving volume (in the GRT sense) is proportional to a and vanishes at the limit of the Big Bang, smoothed to a "silent whisper amplified to a rather big bang" for the critical cosmology for which the 3-curvature vanishes. This cosmology is proposed to result at the limit when the cosmic temperature approaches the Hagedorn temperature.

  4. The TGD counterpart of inflationary cosmology corresponds to a cosmology for which the CP2 projection is a homologically trivial geodesic sphere S2 (presumably also more general Lagrangian (minimal) manifolds are allowed). This cosmology is a vacuum extremal of Kähler action. The metric is unique apart from a parameter defining the duration of this period, which serves as the TGD counterpart of the inflationary period during which the gas of string like objects condensed at space-time surfaces with 4-D M4 projection. This cosmology could serve as an approximate representation of the corresponding GRT cosmology.

    The form of this solution is completely fixed from the condition that the induced metric of the a=constant section is transformed from the hyperbolic metric to the Euclidian metric. It should be easy to check whether this condition is consistent with the minimal surface property.

See the chapter From Principles to diagrams of "Towards M-Matrix" or the article How the hierarchy of Planck constants might relate to the almost vacuum degeneracy for twistor lift of TGD?.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Tuesday, September 20, 2016

175 year old battery still working, Pollack's EZs, cold fusion, self-loading batteries, membrane potential, and nerve pulse

This posting was born as a continuation of an earlier posting inspired by very stimulating weekend discussions with Tapani Alasaarela, which forced me to update my rather fragmentary knowledge about electrochemistry. I add a copy of the continued posting also here because it means a breakthrough in the understanding of the cell membrane as the analog of a self-loading battery and a generalised Josephson junction, and also allows an understanding of what happens in the generation of the nerve pulse.

Elemer Rosinger had a Facebook link to an article telling about the Clarendon dry pile, a very long-lived battery providing energy for an electric clock (see this, this, and this). This clock, known also as the Oxford bell, has been ringing for 175 years now, and the article suggests that the longevity of the battery is not really understood. The bell does not actually ring loudly enough for the human ear to hear it, but one can see the motion of the small metal sphere between the oppositely charged electrodes of the battery in the video.

The principle of the clock is simple. The gravitational field of the Earth is also present. When the sphere touches the negative electrode, it receives a bunch of electrons and gives the bunch away as it touches the positive electrode, so that a current consisting of these bunches runs between the electrodes. The average current during the oscillation period of 2 seconds is of the order of a nanoampere, so that a charge of the order of a nanocoulomb is transferred during each period (one Coulomb corresponds to 6.242 × 10¹⁸ elementary charges).
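These order-of-magnitude numbers are easy to check; a minimal sketch, assuming the nanoampere average current and 2-second period quoted above:

```python
# Order-of-magnitude check of the Clarendon dry pile numbers.
ELEMENTARY_CHARGE = 1.602e-19   # coulombs per electron

period_s = 2.0     # oscillation period quoted in the text
current_a = 1e-9   # average current, of the order of a nanoampere

# Charge per swing and the number of electrons it corresponds to.
charge_per_period = current_a * period_s               # ~2e-9 C
electrons_per_period = charge_per_period / ELEMENTARY_CHARGE

print(f"{charge_per_period:.1e} C, about {electrons_per_period:.2e} electrons per period")
```

So each bunch carried by the sphere contains roughly ten billion electrons: a macroscopic charge, yet a vanishingly small drain on the pile, which is consistent with its longevity.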

The dry pile was invented by the priest and physicist Giuseppe Zamboni in 1812. The pile consists of 2,000 pairs of discs of tin foil glued to paper impregnated with zinc sulphate and coated on the other side with manganese dioxide: 2,000 thin batteries in series. The operation of the battery gradually leads to the oxidation of the zinc and the loss of the manganese dioxide, but the process takes place very slowly. One might actually wonder whether it takes place too slowly, so that some other source of energy than the electrostatic energy of the battery would keep the clock running. The Karpen pile is an analogous battery devised by Nicolae Vasilescu-Karpen. It has now worked for 50 years.

Cold fusion is associated with electrolysis. Could the functioning of this mystery clock involve cold fusion, nowadays taken seriously even by the American Physical Society thanks to the work of the group of Prof. Holmlid? Electrolytes have of course been "understood" for aeons. Ionization leads to charge separation, and a current flows in the resulting voltage. With a feeling of deep shame I must confess that I cannot understand how the ionization is possible in standard physics. This of course might be just my immense stupidity - every second year physics student would immediately tell me that this is "trivial" - so trivial that he would not even bother to explain why. The electric field between the electrodes is immensely weak on the scale of molecules. How can it induce the ionisation? Could ordinary electrolytes involve new physics involving cold fusion liberating energy? These are the questions which pop up in my stupid mind. Stubborn as I am in my delusions, I have proposed what this new physics might be, with inspiration coming from the strange experimental findings of Gerald Pollack, from cold fusion, and from my own view of dark matter as phases of ordinary matter with non-standard value heff=n× h of Planck constant. Continuing with my weird delusions I dare ask: Could cold fusion provide the energy for the "miracle" battery?

To understand what might be involved one must first learn some basic concepts. I am trying to do the same.

  1. A battery consists of two distinct electrochemical cells. Each cell consists of an electrode and an electrolyte. The electrodes are called the anode and the cathode. By definition, the electron current along the external wire flows to the cathode and leaves the anode.

  2. There are also ionic currents flowing inside the battery. In the absence of the ionic currents the electrodes of the battery lose their charge. In the loading the electrodes get their charges. In the ideal situation the ionic current equals the electron current and the battery does not lose its charge. Chemical reactions are however taking place near and at the electrodes, and their reversals take place during charging. The chemical changes are not completely reversible, so that the lifetime of the battery is finite.

    The ionic current can be rather complex: the carriers of the positive charge from the anode can even change during the charge transfer. What matters is that negative charge from the cathode is transferred to the anode in some manner, and this charge logistics can involve several steps. Near the cathode the currents of positive ions (cations) and of electrons from the anode combine to form neutral molecules. The negative current carriers from the cathode to the anode are called anions.

  3. The charge of the electrochemical cell is in the electrolyte near the surface of the electrode rather than inside it, as one might first think, and the chemical processes involve the neutralization of an ion and the transfer of the neutral outcome to or from the electrode.

  4. The cathode - or better, the electrochemical cell containing the cathode - can have both signs of charge. For positive charge one has a battery liberating energy as the electron current connecting the negative and positive poles goes through the load, such as a LED. For negative charge the current flows only if there is an external energy feed: this is the loading of the battery. An external voltage source and thus energy is needed to drive the negative charges and positive charges to the electrodes. The chemical reactions involved can be rather complex and proceed in the reverse direction during the loading process. A mobile phone battery is a familiar example.

    During charging the roles of the anode and the cathode are interchanged: understanding this helps considerably.

Could cold fusion help to understand why the Clarendon dry pile is so long lived?
  1. The battery is a series of very many simpler batteries. The mechanism should reduce to the level of a single building brick. This is assumed in the following.

  2. The charge of the battery tends to be reduced unless the ionic and electronic currents are identical. Also chemical changes occur. The mechanism involved should oppose the reduction of the charge by creating positive charge at the cathode and negative charge at the anode, or by inducing an additional voltage between the electrodes of the battery inducing its loading. The energy feed involved might also change the direction of the basic chemical reactions, as in ordinary loading, by raising the temperature at the cathode or anode.

  3. Could the formation of Pollack's exclusion zones (EZs) in the electrolytic cell containing the anode help to achieve this? EZs carry a high electronic charge. According to the TGD based model, protons are transformed to dark protons at magnetic flux tubes. If the positive dark charge at the flux tubes is transferred to the electrolytic cell containing the cathode and transformed to ordinary charge, it would increase the positive charge of the cathode. The effect would be analogous to the loading of the battery. The energy liberated in the process would compensate for the loss of charging due to the electronic and ionic currents.

  4. In the ordinary loading of the battery the external voltage induces the reversal of the chemical processes occurring in the battery. This is due to the external energy feed. Could the energy feed from dark cold fusion induce similar effects now? For instance, could the energy liberated at the cathode, as positively charged dark nuclei transform to ordinary ones, raise the temperature and in this manner feed the energy needed to change the direction of the chemical reactions?

This model might have an interesting application to the physics of cell membrane.
  1. The cell membrane consisting of two lipid layers defines the analog of a battery. The cell interior plus the inner lipid layer (anode) and the cell exterior plus the outer lipid layer (cathode) are the analogs of the electrolytic cells.

    What has been troubling me for two decades is how this battery manages to load itself. Metabolic energy is certainly needed and the ADP-ATP mechanism is an essential element. I do not however understand how the membrane manages to keep its voltage.

    A second mystery is why it is hyperpolarization rather than depolarization which tends to stabilize the membrane potential, in the sense that the probability for the spontaneous generation of a nerve pulse is reduced. Neither do I understand why depolarization (reduction of the membrane voltage) leads to the generation of a nerve pulse involving a rapid change of the sign of the membrane voltage and the flow of various ionic currents between the interior and exterior of the cell.

  2. In the TGD inspired model for the nerve pulse the cell interior and cell exterior, or at least their regions near the lipid layers, are regarded as super-conductors forming a generalized Josephson junction. For the ordinary Josephson junction the Coulombic energy due to the membrane voltage defines the Josephson energy. Now the Josephson energy is replaced by the ordinary Josephson energy plus the difference of the cyclotron energies of the ion at the two sides of the membrane. Also ordinary Josephson radiation can be generated. The Josephson currents are assumed to run along magnetic flux tubes connecting the cell interior and exterior. This assumption receives support from the strange finding that the small quantal currents associated with the membrane remain essentially the same when the membrane is replaced with a polymer membrane.

  3. The model for the Clarendon dry pile suggests an explanation for the self-loading ability. The electrolytic cell containing the anode corresponds to the negatively charged cell interior, where Pollack's EZs would be generated spontaneously, and the feed of protonic charge to the outside of the membrane would take place along flux tubes as dark protons to minimize dissipation. Also ions would flow along them. The dark protons driven to the outside of the membrane transform to ordinary ones or remain dark and flow spontaneously back, providing the energy needed to add a phosphate to ADP to get ATP.

  4. The system could be quantum critical in the sense that a small reduction of the membrane potential induces a nerve pulse. Why would the ability to generate Pollack's EZs in the interior be lost for a few milliseconds during the nerve pulse? The hint comes from the fact that Pollack's EZs can be generated by feeding infrared radiation to water bounded by a gel. Also the ordinary Josephson radiation generated by the cell membrane Josephson junction has energy in the infrared range!

    Could the ordinary Josephson radiation generate EZs by inducing the ionization of almost ionized hydrogen bonded pairs of water molecules? The hydrogen bonded pairs must be very near to the ionization energy, so that the ordinary Josephson energy of about 0.06 eV assignable to the membrane voltage is enough to induce the ionization followed by the formation of H3/2O. The resulting EZ would consist of layers with the effective stoichiometry H3/2O.

    As the membrane voltage is reduced, the Josephson energy is no longer enough to induce the ionization of the hydrogen bonded pairs of water molecules, EZs are not generated, and the battery voltage is rapidly reduced: a nerve pulse is created. In the case of hyperpolarization the energy exceeds the energy needed for ionization and the situation becomes more stable.
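The claim that the Josephson energy of about 0.06 eV lands in the infrared is easy to verify: for a single elementary charge the energy in eV equals the membrane voltage in volts, and the photon wavelength follows from λ = hc/E. A back-of-the-envelope sketch (the 60 mV figure is the typical resting-potential scale assumed here):

```python
# Photon wavelength corresponding to the Josephson energy of the
# cell membrane battery: E[eV] = V[volts] for one elementary charge.
HC_EV_NM = 1239.84            # h*c in eV*nm

membrane_voltage_v = 0.06     # typical resting membrane potential scale
josephson_energy_ev = membrane_voltage_v

wavelength_nm = HC_EV_NM / josephson_energy_ev
print(f"{wavelength_nm / 1000:.1f} micrometers")   # ~21 um: mid-infrared
```

A wavelength of roughly 20 micrometers indeed sits squarely in the infrared, consistent with the radiation needed to generate Pollack's EZs.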

See the chapter Cold fusion again of "Hyper-finite factors, p-adic length scale hypothesis, and dark matter hierarchy". See also the article with the same title and the article Could Pollack effect make cell membrane a self-loading battery?.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, September 19, 2016

Comments about Ben Goertzel's ideas related to p-adic physics

The following comments are inspired by Ben Goertzel's blog post Wild-ass shit: P-adic physics, p-adic complex probabilities and the Eurycosm. Reading Ben's posting helps considerably in understanding my comments.

Before continuing, a pedantic remark (as a theory builder I keep watch that the one and only interpretation prevails!;-)). p-Adic physics is important in TGD but it does not replace real number based physics. Real physics would still describe the sensory world, while p-adic physics would take care of cognition. All of this would be integrated into the adelic physics of sensory experience and cognition.

p-Adic physics at space-time level

The idea about cognitive representations would be realized at the fundamental level, even for elementary particles, and also at the space-time level: real and p-adic space-time surfaces are in some sense completions of algebraic 2-D objects to 4-D ones. p-Adic space-time sheets would serve as correlates for cognition, real ones for sensory experience.

p-Adic space-time surfaces would be very closely related to real space-time surfaces. They would have a set of common points with preferred imbedding space coordinates in an extension of rationals, and by the strong form of holography (SH) they would be obtained from string world sheets and partonic 2-surfaces (in the following just "2-surfaces") as preferred extremals, highly unique in the real sector: one has effective 2-dimensionality. The 2-surfaces would be number theoretically universal in the sense that the parameters characterizing them - say the coefficients appearing in the polynomials defining rational functions - would be in an algebraic extension of rationals and make sense for reals and for the corresponding extension of p-adics for every p.

Entanglement would be number theoretically universal, even in the strong sense that the superposition coefficients of Hilbert space states would have an interpretation in all number fields, reals and p-adics. The entanglement entropy for this entanglement would make sense in both the real and the various p-adic senses. In the real sector it would measure the ignorance of an outsider about the state of the system, in the p-adic sector the conscious information of the system about itself.

p-Adic uncertainty

The notion of p-adic uncertainty discussed by Ben Goertzel is very interesting. In the TGD framework it would relate to cognitive uncertainty. Here the non-well-ordered character of p-adic numbers is the key notion: p-adic numbers with the same real valued norm (a power of p) cannot be well-ordered using the p-adic norm alone. A p-adic interval has no boundaries, which means problems with the definite integral; these problems give an extremely powerful guideline in attempts to define p-adic physics and fuse it with real physics.

  1. One can refine the notion of resolution. Instead of mapping a p-adic number to its norm, one maps it by what I call canonical identification (or some of its variants) to a real number by the formula ∑ x_n p^n → ∑ x_n p^(-n). The map is continuous and one can also perform a pinary cutoff on the right-hand side, so that this is like taking the N first pinary digits as significant digits. Very natural.

  2. A second aspect of p-adic uncertainty relates to the fact that the notion of angle as a purely p-adic notion fails. Trigonometric and exponential functions can be defined by the same formulas as in the real case, but they are not periodic and the exponential function does not converge. The only cure that I know of is to consider algebraic extensions induced by those of rationals. In particular, roots of unity define the values of the imaginary exponentials for the allowed angles m×2π/n. Angles - or rather the corresponding trigonometric functions - are discretized. The same happens for hyperbolic angles due to a completely unique feature of e: e^p is an ordinary p-adic number.

    This number theoretic analog of the Uncertainty Principle (only a discrete set of phases exists p-adically, but not the corresponding angles) seems to have nothing to do with the Uncertainty Principle proper, but it is better to be cautious. One ends up with the notion of p-adic geometry as a kind of collection of Leibniz monads labelled by points, which are number theoretically universal and correspond to an algebraic extension of rationals. In the p-adic sectors each monad is a p-adic continuum, and differential calculus and field equations make sense. In the real sector the notion of manifold is replaced with its number theoretic version involving also the discretization as labels for manifold charts. Riemann geometry would have a finite measurement resolution.

    The discrete spine of the space-time surface consisting of algebraic points would help to algebraically continue the 2-surfaces to a space-time surface as a preferred extremal and would give additional conditions. In the p-adic sectors p-adic pseudo constants (non-vanishing functions with vanishing derivative) would make the continuation easy: it is easy to imagine. In the real sector the continuation need not always be possible: all that is imaginable is not realizable.

  3. Inclusions of hyperfinite factors provide a third view of measurement uncertainty. This would also be a number theoretically universal notion. Hyperfinite factors define infinite-D spaces for which a finite-D approximation can be made arbitrarily precise, and this strongly suggests a connection with p-adicity: pinary digits can be classified by their significance, and ultrametricity holds.
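The canonical identification of item 1 is simple to make concrete; a minimal sketch, where representing a p-adic number as a list of its pinary digits is my own illustrative choice:

```python
def canonical_identification(digits, p):
    """Map a p-adic number with pinary digits x_n (the coefficient of p^n)
    to the real number sum x_n p^(-n)."""
    return sum(x * float(p) ** (-n) for n, x in enumerate(digits))

# The 2-adic integer ...1111 (the 2-adic expansion of -1) maps to
# 1 + 1/2 + 1/4 + ... -> 2 as more pinary digits are kept.
print(canonical_identification([1] * 30, 2))   # ~2.0
```

Truncating the digit list implements the pinary cutoff mentioned above: keeping N digits fixes the image to a resolution p^(-N+1), so the most significant p-adic digits become the most significant real digits.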

p-Adic variants of lattices (Knut and Skilling)

Any lattice whose basis vectors have components in an algebraic extension of rationals makes sense also p-adically in the corresponding extension of p-adic numbers. One would have p-adic integer multiples of the basis vectors, in the p-adic sense this set would be a continuum, and one could do p-adic differential calculus. The definite integral is the problem and has led me to the notion of p-adic/adelic monadology allowing both smooth physics (field equations) and discretization in terms of finite measurement resolution. Concerning discrete data processing - about which I know very little - the possibility to bring in p-adic differential calculus as a tool might be interesting.

p-Adic probability

Here the problem is that a Hilbert space with p-adic coefficients allows zero norm states, since the sum ∑ xn^2 can vanish. The manner to get rid of this problem is number theoretic universality: the coefficients xn belong to an algebraic extension of rationals and make sense in the induced extensions of p-adics for all primes. This might look like a technical detail but is of fundamental importance, also from the point of view of cognitive consciousness. One also ends up with the notion of negentropic entanglement in the p-adic sectors of the adelic Universe.
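The existence of zero norm states can be made concrete: for p ≡ 1 (mod 4) a square root i of -1 exists among the p-adic integers, so the vector (1, i) has an exactly vanishing sum of squares. A minimal sketch (the function name and the Hensel-lifting approach are my own illustration, not from the text):

```python
def sqrt_minus_one_mod_pk(p, k):
    """Hensel-lift a square root of -1 from mod p up to mod p^k.
    Requires p = 1 (mod 4) so that -1 is a quadratic residue mod p."""
    # Find a root of x^2 + 1 = 0 mod p by brute force.
    x = next(a for a in range(p) if (a * a + 1) % p == 0)
    pk = p
    for _ in range(k - 1):
        pk_next = pk * p
        # Newton/Hensel step for f(x) = x^2 + 1: x <- x - f(x)/f'(x).
        fx = (x * x + 1) % pk_next
        inv = pow(2 * x, -1, pk_next)  # modular inverse (Python 3.8+)
        x = (x - fx * inv) % pk_next
        pk = pk_next
    return x

p, k = 5, 10
i5 = sqrt_minus_one_mod_pk(p, k)
# The vector (1, i5) has vanishing p-adic Hilbert-space norm:
norm_sq = (1 * 1 + i5 * i5) % p ** k
print(norm_sq)  # 0
```

Restricting the coefficients to an algebraic extension of rationals, as the text prescribes, excludes such genuinely p-adic roots of -1 unless the extension itself contains i.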

Ultrametricity

Ultrametricity (UM) emerged originally from the modelling of the spin glass energy landscape having a fractal structure: valleys inside valleys. In TGD the huge vacuum degeneracy of Kähler action inspired the view that small deformations of the vacuum extremals can be regarded as a 4-D version of the spin glass energy landscape. The twistor lift of TGD strongly suggests that this degeneracy is lifted by the cosmological constant (extremely small in the recent cosmology), and it is plausible that the landscape remains; p-adic physics for various values of p would be a natural manner to characterize the landscape at the level of the "World of Classical Worlds" - the TGD analog of the world of euryphysics of Goertzel.

The observation that the distribution of distances among sparse vectors exhibits ultrametricity (see this) is very interesting. I confess that I do not understand exactly how it comes out, since I have no intuition about data representations.

What is however interesting is the following.

  1. In very high-dimensional situations the notion of nearest neighbor must be defined modulo resolution. This brings in mind p-adicity and discretization. There is no point in keeping a well-ordering when the measurement resolution is poor.

  2. According to the reference, the notions of metric distance and Riemannian geometry might not be useful in very high dimensions. This in turn suggests that the notion of monadic/adelic geometry and the discretized distances associated with it might be a more useful approach. There is also a connection with the hierarchies mentioned to make possible the unexpected effectiveness of deep learning. Associated with a given p-adic number field there is a hierarchy of measurement resolutions (powers of p), p-adic number fields form a hierarchy, and there are sub-hierarchies corresponding to primes near powers of some prime, in particular 2. This fits nicely with the idea about p-adic physics as a physics of cognition.

  3. There might be a deep connection between inclusions of hyperfinite factors and the p-adic description of these sparse data sets. Finite resolution makes the notion of nearest neighbor obsolete for distances smaller than the resolution, and therefore p-adicization, which does not allow a well-ordering of points below the resolution, would be natural.

  4. One could also start directly from an ultrametric distance function defined in the space of data points. Define some height function (energy is a good metaphor for it) in the data space and define the distance between valleys A and B (minima of the height function) along a given path from A to B as the maximum of the height function on that path: that is, as the height of the highest mountain which you must climb over. The distance from A to B is then the minimum of this quantity over all paths from A to B. The challenge would be to assign some kind of energy or action to a data point. Can one imagine some universal height function? Could it correspond to some kind of physical energy, or its exponent, maybe characterizing the probability of occurrence of the data point as an analog of a Boltzmann weight? p-Adic thermodynamics for data points?

    Example: in chemical reaction kinetics this distance would correspond to the lowest activation barrier among the paths of intermediate states leading from configuration A to B.
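The valley-to-valley distance described in item 4 is a minimax ("lowest pass") distance, and it automatically satisfies the strong triangle inequality d(A,C) ≤ max(d(A,B), d(B,C)). A small brute-force sketch on a toy landscape (the graph and the heights are made up for illustration):

```python
# Toy "energy landscape": nodes with heights, edges giving adjacency.
height = {"A": 0.0, "x": 3.0, "B": 1.0, "y": 5.0, "C": 0.5}
edges = {("A", "x"), ("x", "B"), ("B", "y"), ("y", "C"), ("A", "y")}
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def barrier(a, b):
    """Ultrametric distance: over all simple paths a -> b, the minimum of
    the maximal height encountered along the path (the lowest pass)."""
    best = float("inf")
    def walk(node, seen, peak):
        nonlocal best
        if node == b:
            best = min(best, peak)
            return
        for nxt in adj[node]:
            if nxt not in seen:
                walk(nxt, seen | {nxt}, max(peak, height[nxt]))
    walk(a, {a}, height[a])
    return best

d_AB = barrier("A", "B")   # pass over x: height 3 (the route over y costs 5)
d_BC = barrier("B", "C")   # every route to C climbs over y: height 5
d_AC = barrier("A", "C")   # likewise 5
# Strong triangle (ultrametric) inequality: d(A,C) <= max(d(A,B), d(B,C))
print(d_AB, d_BC, d_AC)    # 3.0 5.0 5.0
```

For large graphs the same distance can be computed efficiently with a minimax variant of Dijkstra's algorithm or from a minimum spanning tree; the brute force above is only meant to make the definition transparent.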

Algebraic extensions of rationals and data processing

Algebraic extensions have some algebraic dimension. Could one perform data processing by mapping discretized real spaces of arbitrary dimension to these extensions, which are also complex? Arbitrarily high-D structures would be mapped to a subset of complex numbers and endowed with a natural p-adic topology and metric. These structures, interpreted in terms of 2-D string world sheets and partonic 2-surfaces, could be lifted also to the space-time level by strong form of holography (SH): one would have a representation of data at the space-time level!

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Saturday, September 10, 2016

The world is really simple: even neural nets are able to model it!

The good news of this morning! Neural networks (one speaks of deep learning) model the world much better than one might expect on the basis of mathematical arguments alone. This is a real mystery. Deep learning means AI systems with a large number of hierarchy levels: programs calling programs calling... is the first intuitive idea of an AI outsider like me.

The solution of the puzzle proposed by physicists is elegant. The physical world is much, much simpler than mathematicians - wanting to be as general as possible - assume! Simplicity means among other things holography and hierarchical structures, and deep learning relies on hierarchical structures. It would be amazing if AI and physics finally could meet each other! For more details see the article of Lin and Tegmark. See also the remarks of Ben Goertzel.

Holography and its strong form

The Universe is indeed very simple according to holographic theories. For instance, in TGD not only holography but the strong form of holography holds true. The quantum and classical data assignable to string world sheets and partonic 2-surfaces dictate the dynamics of the 4-D space-time surface. This effective 2-dimensionality of the dynamics means an enormous simplification of the quantum physical world from what it could be. For instance, preferred extremals defining space-time surfaces satisfy an infinite number of conditions stating the vanishing of certain Noether charges.

This extreme simplicity is lost when the sheets of the many-sheeted space-time are lumped together to obtain the space-time of general relativity and the standard model, and effective classical fields are sums over the geometrized classical fields associated with the sheets. In biological systems, however, the dynamics of many-sheetedness becomes manifest and the action of a single sheet need not be masked: things get simple in this kind of situation.

Various fractal hierarchies

Holography need not be the only reason for the simplicity. The physical world of TGD has a hierarchical fractal structure: length scale reductionism is replaced with fractality. Dynamics looks more or less similar in all zooms, and this simplifies the situation of the mimicker enormously. There are hierarchies of space-time sheets topologically condensed on larger space-time sheets, a hierarchy of p-adic length scales defined by primes near powers of two (or of a more general small prime), a hierarchy of Planck constants, and a self hierarchy. The p-adic length scale hierarchy allows an extremely simple model for elementary particle masses: one might perhaps say that one does not model the mass of the "real" particle but its cognitive representation of itself in terms of p-adic thermodynamics relying on conformal invariance. The hierarchy of Planck constants means a fractal hierarchy of zoom-ups of the system: dark matter phases assignable to quantum criticality would be crucial for the understanding of living systems.

These hierarchies also define hierarchies of measurement resolutions making possible abstraction - getting rid of details at the level of conscious experience and behavior. The hierarchical structure would be especially important for the conscious mind. Self has subselves, which it experiences as mental images, and is itself a mental image of a higher level self. Goal hierarchies mean a lot of structural restrictions making it easier for artificial intelligence to mimic conscious systems.

Quantum states realize finite measurement resolution themselves

Conceptualization means hierarchies, and one can say that the TGD Universe performs this conceptualization for us! In fact, one can say that a quantum state provides its own description. This implies that finite measurement resolution is not a property of the description of the quantum state but of the quantum state itself! For instance, the larger the number of partonic 2-surfaces and string world sheets is, the better the "half-discretization" of the 4-D space-time surface by these 2-surfaces is, and the more precise is the conscious experience of the system about itself. For instance, magnetic flux tube networks with flux tubes accompanied by strings, with maximal entanglement at the ends of the flux tubes at the nodes, would give rise to a universal proprioception. The experience about 3-space would emerge from entanglement; the 3-space itself would not, as some colleagues fashionably argue.

Simplicity in cosmology

This extreme simplicity is most dramatic in cosmology. The microwave background temperature is essentially constant. This cannot be due to causal interactions but reflects something deeper. Inflationary scenarios are one attempt to explain this but have not led to a breakthrough. A more radical explanation is that macroscopic quantum coherence, even in cosmological scales, is possible at space-time sheets of cosmic size with a large value of Planck constant characterizing phases of ordinary matter behaving like dark matter. The key idea is the generalization of the point-like particle to a 3-surface: particle and 3-space are one and the same thing. Particles as 3-surfaces can have even cosmological size.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Does GW150914 force us to modify the views about the formation of binary blackhole systems?

The considerations below were inspired by a popular article related to the discovery of gravitational radiation from the formation of a blackhole from two unexpectedly massive blackholes.

LIGO has hitherto detected two events in which the formation of a blackhole as the fusion of two blackholes has generated a detectable burst of gravitational radiation. The expected masses for the stars of a binary are typically around 10 solar masses. The latter event involves a pair with masses of 8 and 14 solar masses, marginally consistent with the expectation. The first event, GW150914, involves masses of about 30 solar masses. This looks like a problem, since blackhole formation is believed to be preceded by the formation of a red supergiant and a supernova, and in these events the star loses a large fraction of its mass.

The standard story of the evolution of a binary into a pair of blackholes would go as follows.

  1. In the beginning the stars involved have masses in the range of 10-30 solar masses. The first star runs out of the hydrogen fuel in its core and starts to burn hydrogen around the helium core. In this step it puffs up much of the hydrogen in its surface layers, forming a red supergiant. The nuclear fusion proceeds in the core until an iron core is formed and fusion cannot continue anymore. The first star explodes as a supernova and a lot of mass is thrown out (conservation of momentum forces this).

  2. The second star sucks in much of the hydrogen after the formation of the red supergiant. The core of the first star eventually collapses into a black hole. The stars gradually end up close to each other. As the second star turns into a supergiant, it engulfs its companion inside a common hydrogen envelope. The stars end up even closer to each other, and the envelope is lost into space. Eventually the core of the second star also collapses into a black hole. The two black holes finally merge together. The model predicts that, due to the mass losses, the masses of the companions of the binary are not much higher than 10 solar masses. This is the problem.

Selma de Mink has proposed a new kind of story about the formation of blackholes from the stars of a binary.
  1. The story begins with two very massive stars rotating around each other extremely rapidly and so close together that they become tidally locked. They are like tango dancers. Both dancers would spin around their own axis in the same direction as they spin with respect to each other. This spinning would stir the stars and make them homogeneous. Nuclear fusion would continue in the entire volume of the star rather than in the core only. The stars would never run out of fuel nor throw away their hydrogen layers. Therefore the resulting blackhole would be much more massive. This story would apply only to binaries.

  2. The simulations of the homogeneous model however have difficulties with more conventional binaries, such as the blackhole of the second LIGO signal. A second problem is that the blackholes forming GW150914 have very low spins, if any. The proposed explanation would be in terms of the dance metaphor.

    Strong magnetic fields are present, forcing the matter to flow near the magnetic poles. The effect would be similar to what happens when a figure skater stretches her arms to increase the moment of inertia around the spin axis so that the spinning rate slows down by angular momentum conservation. This requires that the direction of the dipole differs considerably from the axis of rotation. Otherwise the spinning rate increases, since the moment of inertia is reduced: this is how the dancer develops the pirouette. The naive expectation is that the directions of the magnetic and rotation axes are near to each other.

What kind of story would TGD suggest? The basic ingredients of the TGD story can be found in the article about the LIGO discovery. Also the sections about the role of dark matter and the magnetic flux tubes in the twistor lift of TGD might be helpful.
  1. The additional actor in this story is dark matter identified as large heff=hgr phases with hbargr=GMm/v0, where v0 < c is a parameter with dimensions of velocity (c=1 is assumed for convenience) (see this). M is the large mass and m a small mass, say the mass of an elementary particle. The parameter v0 could be proportional to a typical rotational velocity in the system with a universal coefficient.

    The crucial point is that the gravitational Compton length Λgr= hbargr/m= GM/v0 of the particle does not depend on its mass and for v0<c/2 is larger than the Schwarzschild radius rS= 2GM. For v0>c/2 the dark particles can reside inside the blackhole.

  2. Could dark matter be involved with the formation of very massive blackholes in the TGD framework? In particular, could the transformation of dark matter to ordinary matter devoured by the blackhole, or its ending up as such into the blackhole, help to explain the large mass of GW150914?

    I have written already earlier about a related problem. If dark matter were sucked into blackholes, the amount of dark matter should be much smaller in the recent Universe and it would look very different. The TGD inspired proposal is that the dark matter is dark in the TGD sense and has a large value of Planck constant heff=n× h =hgr, implying that the dark Compton length for a particle with mass m is given by Λgr= hbargr/m= GM/v0=rS/2v0. Λgr is larger than the blackhole horizon radius for v0/c<1/2, so that the dark matter remains outside the blackhole unless it suffers a phase transition to ordinary matter.

    For v0/c>1/2 dark matter can be regarded as being inside the blackhole or as having transformed to ordinary matter. Also the ordinary matter inside rS could transform to dark matter. For v0/c =1/2 one has Λgr=rS, and one might say that the dark matter resides at the surface of the blackhole.

  3. What could happen in blackhole binaries? Could the phase transition of dark matter to ordinary matter take place, or could dark matter reside inside the blackhole for v0/c ≥ 1/2? This would suggest a large spin at the surface of the blackhole. Note that the angular momenta of dark matter - possibly at the surface of the blackhole - and of ordinary matter in the interior could cancel each other.

    The GRT based model of GW150914 has a parameter with dimensions of velocity very near to c, and the earlier argument leads to the proposal that v0 approaches its maximal value, meaning that Λgr approaches rS/2. Already Λgr=rS allows one to regard dark matter as part of the blackhole: dark matter would reside at the surface of the blackhole. The additional dark matter contribution could explain the large mass of GW150914 without giving up the standard view about how stars evolve.

  4. Could magnetic fields explain the low spin of the components of GW150914? In the TGD based model for blackhole formation, magnetic fields are in a key role. Quite generally, gravitational interactions would be mediated by gravitons propagating along magnetic flux tubes. The sunspot phenomenon in the Sun involves a twisting of the flux tubes of the magnetic field, and with an 11 year period reconnections of flux tubes resolve the twisting: this involves a loss of angular momentum. Something similar is expected now: dark photons, gravitons, and possibly also other particles at the magnetic flux tubes would carry part of the angular momentum of the rotating blackhole (or star). The gamma ray pulse observed by the Fermi telescope and assigned to GW150914 could be associated with this un-twisting, sending the angular momentum of the twisted flux tubes out of the system. This process would transfer the spin of the star out of the system and produce a slowly spinning blackhole. The same process could have taken place for the component blackholes and explain why their spins are so small.

  5. Do the blackholes of the binary dance now? If the gravitational Compton length Λgr= GM/v0 of the dark matter particles is so large that the other blackhole is contained within the sphere of radius Λgr, one might expect that they form a single quantum system. This would favor v0/c considerably smaller than 1/2. Tidal locking could take place for the ordinary matter, favoring parallel spins. For dark matter, antiparallel spins would be favored by the vortex analogy (hydrodynamical vortices with opposite spins attract each other).
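The ratios between Λgr and rS used in the items above are easy to check numerically. In SI units the c=1 formula Λgr = GM/v0 becomes Λgr = GM/(v0 c) with v0 expressed as a fraction of c; the constants below are standard, while the interpretation follows the text. A minimal sketch:

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg

def schwarzschild_radius(M):
    return 2 * G * M / c**2

def grav_compton_length(M, v0_over_c):
    # hbar_gr = G M m / v0  =>  Lambda_gr = hbar_gr / (m c) = G M / (v0 c)
    return G * M / (v0_over_c * c**2)

M = 30 * M_sun      # mass scale of the GW150914 components
r_S = schwarzschild_radius(M)
for v0 in (0.25, 0.5, 1.0):
    ratio = grav_compton_length(M, v0) / r_S   # = 1 / (2 v0/c)
    print(f"v0/c = {v0}: Lambda_gr / r_S = {ratio:.2f}")
# v0/c = 1/4 gives Lambda_gr = 2 r_S (dark matter well outside the horizon),
# v0/c = 1/2 gives Lambda_gr = r_S (dark matter "at the horizon"),
# v0/c -> 1 gives Lambda_gr -> r_S/2, as in item 3.
```

The ratio Λgr/rS = 1/(2 v0/c) is independent of M, so the conclusions hold for any blackhole mass.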

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.



Friday, September 09, 2016

Is cold fusion taking place in a 175 year old battery still working?

Elemer Rosinger had a Facebook link to an article telling about the Clarendon dry pile, a very long-lived battery providing energy for an electric clock (see this, this, and this). This clock, also known as the Oxford bell, has been ringing for 175 years now, and the article suggests that the longevity of the battery is not really understood. The bell is not actually ringing so loudly that the human ear could hear it, but one can see the motion of the small metal sphere between the oppositely charged electrodes of the battery in the video.

The principle of the clock is simple. The gravitational field of the Earth is also involved. When the sphere touches the negative electrode, it receives a bunch of electrons and gives the bunch away as it touches the positive electrode, so that a current consisting of these bunches runs between the electrodes. The average current during the oscillation period of 2 seconds is of the order of a nanoampere, so that a charge of the order of a nanocoulomb is transferred during each period (one Coulomb corresponds to 6.242 × 10^18 elementary charges).
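The order of magnitude is easy to check, taking the ~1 nA average current and the 2 second period quoted above at face value:

```python
# Order-of-magnitude check of the charge moved per swing of the clapper.
I_avg = 1e-9        # average current, ~1 nA (figure quoted in the text)
T = 2.0             # oscillation period in seconds (as stated in the text)
e = 1.602e-19       # elementary charge, C

Q = I_avg * T       # charge per period: 2e-9 C, i.e. nanocoulomb scale
n_electrons = Q / e # ~1.2e10 electrons per swing
print(Q, n_electrons)
```

So each "bunch" carries roughly ten billion electrons, a macroscopically tiny but microscopically enormous number.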

The dry pile was invented by the priest and physicist Giuseppe Zamboni in 1812. The pile consists of 2,000 pairs of discs of tin foil glued to paper impregnated with zinc sulphate and coated on the other side with manganese dioxide: 2,000 thin batteries in series. The operation of the battery gradually leads to the oxidation of the zinc and the loss of manganese dioxide, but the process takes place very slowly. One might actually wonder whether it takes place too slowly, so that some other source of energy than the electrostatic energy of the battery would keep the clock running. The Karpen pile is an analogous battery invented by Vasily Karpen. It has now worked for 50 years.

Cold fusion is associated with electrolysis. Could the functioning of this mystery clock involve cold fusion - nowadays taken seriously even by the American Physical Society thanks to the work of the group of Prof. Holmlid? Electrolytes have of course been "understood" for aeons. Ionization leads to charge separation, and current flows in the resulting voltage. With a feeling of deep shame I must confess that I cannot understand how the ionization is possible in standard physics. This of course might be just my immense stupidity - every second year physics student would immediately tell me that this is "trivial" - so trivial that he would not even bother to explain why. The electric field between the electrodes is immensely weak on the scale of molecules. How can it induce the ionisation? Could ordinary electrolytes involve new physics involving cold fusion liberating energy? These are the questions which pop up in my stupid mind. Stubborn as I am in my delusions, I have proposed what this new physics might be, with inspiration coming from the strange experimental findings of Gerald Pollack, from cold fusion, and from my own view about dark matter as phases of ordinary matter with a non-standard value heff=n× h of Planck constant. Continuing with my weird delusions I dare ask: Could cold fusion provide the energy for the "miracle" battery?

To understand what might be involved one must first learn some basic concepts. I am trying to do the same.

  1. A battery consists of two distinct electrochemical cells. Each cell consists of an electrode and an electrolyte. The electrodes are called the anode and the cathode. By definition, the electron current along the external wire flows to the cathode and leaves the anode.

  2. There are also ionic currents flowing inside the battery. In the absence of the ionic currents the electrodes of the battery lose their charge. In the loading the electrodes get their charges. In the ideal situation the ionic current is the same as the electron current and the battery does not lose its charging. Chemical reactions are however taking place near and at the electrodes, and their reversals take place during charging. The chemical changes are not completely reversible, so that the lifetime of the battery is finite.

    The ionic current can be rather complex: the carriers of the positive charge from the anode can even change during the charge transfer. What matters is that negative charge from the cathode is transferred to the anode in some manner, and this charge logistics can involve several steps. Near the cathode the currents of positive ions (cations) and electrons from the anode combine to form neutral molecules. The negative current carriers from the cathode to the anode are called anions.

  3. The charge of the electrochemical cell is in the electrolyte near the surface of the electrode, rather than inside it as one might first think, and the chemical processes involve the neutralization of the ion and the transfer of the neutral outcome to or from the electrode.

  4. The cathode - or better, the electrochemical cell containing the cathode - can have either sign of charge. For positive charge one has a battery liberating energy as the electron current connecting the negative and positive poles goes through the load, such as a LED. For negative charge current flows only if there is external energy feed: this is the loading of the battery. An external voltage source and thus energy is needed to drive the negative charges and positive charges to the electrodes. The chemical reactions involved can be rather complex and proceed in the reverse direction during the loading process. A mobile phone battery is a familiar example.

    During charging the roles of the anode and cathode are interchanged: understanding this helps considerably.

Could cold fusion help to understand why the Clarendon dry pile is so long lived?
  1. The battery is a series of very many simpler batteries. The mechanism should reduce to the level of a single building brick. This is assumed in the following.

  2. The charge of the battery tends to be reduced unless the ionic and electronic currents are identical. Also chemical changes occur. The mechanism involved should oppose the reduction of the charging by creating positive charge at the cathode and negative charge at the anode, or by inducing an additional voltage between the electrodes of the battery inducing its loading. The energy feed involved might also change the direction of the basic chemical reactions, as in ordinary loading, by raising the temperature at the cathode or anode.

  3. Could the formation of Pollack's exclusion zones (EZs) in the electrolytic cell containing the anode help to achieve this? EZs carry a high electronic charge. According to the TGD based model, protons are transformed to dark protons at magnetic flux tubes. If the positive dark charge at the flux tubes is transferred to the electrolytic cell containing the cathode and transformed to ordinary charge, it would increase the positive charge of the cathode. The effect would be analogous to the loading of the battery. The energy liberated in the process would compensate for the loss of charge energy due to the electronic and ionic currents.

  4. In the ordinary loading of the battery the external voltage induces the reversal of the chemical processes occurring in the battery. This is due to the external energy feed. Could the energy feed from dark cold fusion induce similar effects now? For instance, could the energy liberated at the cathode, as positively charged dark nuclei transform to ordinary ones, raise the temperature and in this manner feed the energy needed to change the direction of the chemical reactions?

This model might have an interesting application to the physics of the cell membrane.
  1. The cell membrane consisting of two lipid layers defines the analog of a battery. The cell interior plus the inner lipid layer (anode) and the cell exterior plus the outer lipid layer (cathode) are the analogs of electrolyte cells.

    What has been troubling me for two decades is how this battery manages to load itself. Metabolic energy is certainly needed and the ADP-ATP mechanism is an essential element. I do not however understand how the membrane manages to keep its voltage.

    A second mystery is why it is hyperpolarization rather than polarization which tends to stabilize the membrane potential, in the sense that the probability for the spontaneous generation of a nerve pulse is reduced. Neither do I understand why depolarization (reduction of the membrane voltage) leads to the generation of a nerve pulse involving a rapid change of the sign of the membrane voltage and the flow of various ionic currents between the interior and exterior of the cell.

  2. In the TGD inspired model for the nerve pulse, the cell interior and cell exterior - or at least their regions near the lipid layers - are regarded as superconductors forming a generalized Josephson junction. For an ordinary Josephson junction the Coulomb energy due to the membrane voltage defines the Josephson energy. Now the Josephson energy is replaced by the ordinary Josephson energy plus the difference of the cyclotron energies of the ion at the two sides of the membrane. Also ordinary Josephson radiation can be generated. The Josephson currents are assumed to run along magnetic flux tubes connecting the cell interior and exterior. This assumption receives support from the strange finding that the small quantal currents associated with the membrane remain essentially the same when the membrane is replaced with a polymer membrane.

  3. The model for the Clarendon dry pile suggests an explanation for the self-loading ability. The electrolytic cell containing the anode corresponds to the negatively charged cell interior, where Pollack's EZs would be generated spontaneously, and the feed of protonic charge to the outside of the membrane would take place along flux tubes as dark protons to minimize dissipation. Also ions would flow along them. The dark protons driven to the outside of the membrane would transform to ordinary ones, or remain dark and flow spontaneously back, providing the energy needed to add a phosphate to ADP to get ATP.

  4. The system could be quantum critical in the sense that a small reduction of the membrane potential induces a nerve pulse. Why would the ability to generate Pollack's EZs in the interior be lost for a few milliseconds during the nerve pulse? The hint comes from the fact that Pollack's EZs can be generated by feeding infrared radiation to water bounded by a gel. Also the ordinary Josephson radiation generated by the cell membrane Josephson junction has energy in the infrared range!

    Could the ordinary Josephson radiation generate EZs by inducing the ionization of almost ionized hydrogen bonded pairs of water molecules? The hydrogen bonded pairs must be very near to the ionization energy, so that the ordinary Josephson energy of about 0.06 eV assignable to the membrane voltage is enough to induce the ionization followed by the formation of H3/2O. The resulting EZ would consist of layers with the effective stoichiometry H3/2O.

    As the membrane voltage is reduced, the Josephson energy would no longer be enough to induce the ionization of the hydrogen bonded pairs of water molecules, EZs would not be generated, and the battery voltage would be rapidly reduced: a nerve pulse is created. In the case of hyperpolarization the energy exceeds the energy needed for the ionization and the situation becomes more stable.
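The claim that the membrane's Josephson energy lies in the infrared can be checked with the photon relation λ = hc/E (hc ≈ 1240 eV·nm). A sketch, assuming a single elementary charge crossing the junction and the ~60 mV membrane potential quoted above:

```python
HC_EV_NM = 1239.84   # hc in eV·nm

def josephson_energy_eV(voltage_V):
    """Energy e*V of a single elementary charge crossing the junction, in eV.
    Numerically E[eV] = V[volts] for charge e; a Cooper pair would give 2eV."""
    return voltage_V

def photon_wavelength_nm(energy_eV):
    return HC_EV_NM / energy_eV

V_membrane = 0.06                        # ~60 mV resting potential
E_J = josephson_energy_eV(V_membrane)    # 0.06 eV
lam = photon_wavelength_nm(E_J)          # ~2.1e4 nm, i.e. ~21 micrometers
print(E_J, lam)
```

A wavelength of roughly 20 µm sits squarely in the mid-infrared, consistent with the text's claim that the Josephson radiation has the right energy scale to drive EZ formation by infrared light.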

See the chapter Cold fusion again of "Hyper-finite factors, p-adic length scale hypothesis, and dark matter hierarchy". See also the article with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Tuesday, September 06, 2016

Emergence of 3-space or only the emergence of experience about 3-space?

The emergence of 3-space and even of 4-D space-time have become fashionable ideas. The motivation is understandable. In string models - or formal string theory, as the research field is called after the LHC experience - space-time is replaced with string world sheets in 10-, 11-, or even 12-D target space, and the problem is how to get 4-D space-time. Spontaneous compactification, believed to lead to Minkowski space with points replaced with Calabi-Yau manifolds, was the first, not so successful, dream: the landscape catastrophe was the outcome. Branes were the second attempt: 3-branes would give rise to 3-space. They were just added in order to obtain internal consistency, but how they would consist of fundamental strings remained a complete mystery: hence the dreams about emergence. The landscape catastrophe only got worse, but the bonus was that now one had two space-times: when the other one is out of condition, one can use the other one;-).

To me the emergence of space-time does not sound like a realistic idea, and I have not seen any proposals which would not be circular: one starts from the existence of 2-surfaces in 3-space realizing holography and gets the 3-space, which was already there. A good dose of philosophical thinking could have saved us from many fashions of theoretical physics of the last decades;-).

A more realistic version of the idea would be that entanglement makes possible the emergence of conscious experience of 3-D space. The existence of 3-space as a geometric object does not require entanglement.

In the TGD picture the information characterizing quantum states can be localized by strong form of holography (SH) to partonic 2-surfaces (or their light-like, metrically 2-D orbits - I am getting pedantic;-)) and string world sheets. However, the possibility of entanglement between partonic 2-surfaces means that it is not enough to give only the 2-D quantum data at the partonic 2-surfaces. Also the entanglement between them must be specified. This brings in a half-discretized form of 3-dimensionality: not a discrete lattice of points, but discrete points replaced by partonic 2-surfaces. Strings are accompanied by magnetic flux tubes, and the flux tube network gives rise to our conscious experience of 3-space.

In TGD inspired quantum biology the biological body (BB) has this kind of magnetic spine - the magnetic body (MB) - actually extending far beyond the boundaries of the BB. It gives rise to a third person perspective of conscious experience and explains out-of-body experiences and near death experiences. EEG makes possible communications to the MB and the control of the BB by the MB.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Is the impossible EM drive possible in TGD Universe?

NASA's impossible EM drive has appeared in Facebook again and again (see this as an example), and I finally lost my patience and decided to learn what is involved. The Wikipedia article describes the EM drive and gives a lot of references. The original skepticism of the mainstream is probably changing to real curiosity after several replications.

1. Basic facts about EM drive

First some raw data from the Wikipedia article.

  1. According to the Wikipedia article, Roger Shawyer, who is behind the concept, has claimed that the prototype produces a total thrust of about .02 Newtons using the power provided by an 850 W magnetron. To get some perspective, note that in order to lift a 1 kg weight with a velocity of 1 m/s in the gravitational field g = 10 m/s2, a power of 10 W is required, so that the construction might be scalable. The device could operate only a few dozen seconds before the magnetron failed due to overheating. Therefore the hype about travels to the Moon within a few hours should be taken cautiously!

  2. There would be no fuel in the conventional sense of the word. The basic conservation laws of momentum and energy however require that if a system gains momentum, there must be another system gaining the opposite momentum. For an ordinary rocket this would be the exhaust fuel. Now no exhaust has been observed, and this is what is thought to make the drive "impossible". For instance, NASA researchers talk about "quantum vacuum virtual plasma" as the system with which the momentum would be exchanged. Also energy is needed. The magnetron would provide this energy.
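As a sanity check of the numbers above - the thrust and power are those quoted from the Wikipedia article, and the lifting example is the one in item 1:

```python
# Sanity check of the quoted numbers.
thrust = 0.02        # N, claimed total thrust
power_in = 850.0     # W, magnetron input power
print(thrust / power_in)   # thrust-to-power ratio, about 2.4e-5 N/W

# Power needed to lift m = 1 kg at v = 1 m/s against g = 10 m/s^2
m, g, v = 1.0, 10.0, 1.0
print(m * g * v)           # 10.0 W, as stated in the text
```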

The theory of Shawyer for the EM drive can be found here. The basic idea is very simple.
  1. Consider first an ordinary rocket. The fuel explodes and liberates chemical energy, and part of the exhaust products is allowed to leave the rocket, which experiences the reaction force and gains momentum. One can also modify this rocket a little bit. This is not practical but serves a noble pedagogical purpose. Allow the fuel to leak out in opposite directions but in such a manner that the leakage is smaller in the second direction. The rocket accelerates also now, since the two forces due to the leakage do not cancel each other.

  2. Next do some abstraction. What matters for the conservation laws are energy and momentum, not the medium which carries them. Replace the fuel with microwaves reflected back and forth in a microwave cavity, having energy but no net momentum. Replace the fuel tank by a magnetron producing the radiation.

    Arrange the situation so that the leakage of momentum is realized as radiation pressure, which is different at the two ends of the cavity. For the ordinary fuel this is not a problem, and it is difficult to see why it should be a problem for the em fuel. This em fuel would be produced by the magnetron in cyclotron transitions with cyclotron frequencies equal to the resonance frequencies of the microwave cavity. This requires tuning of the strength of the magnetic field and of the length of the cavity. The system would be critical in this sense.

  3. The asymmetry between the ends, realized somehow, would create a net force on the system as the difference of the forces at the two ends of the cavity. One could interpret this also by saying that the reaction force makes the system move. The needed momentum exchange would be between the radiation field and the rocket. Microwave energy and also a net momentum leave the system, just like the momentum-carrying fuel leaves an ordinary rocket. The dimensionless Q value characterizes the flow of energy out of the system. Also the flow of momentum at the ends of the cavity would be proportional to Q.

  4. The claim of Shawyer indeed is that the net forces (pressure times area) at the two ends are different. This would be due to the different group velocities assignable to the classical em field at the two ends of the cavity and also due to the different areas. The argument is that at the smaller end (disk) the group velocity of the wave is lower, because the reflections from the walls of the cavity occur more often, so that the paths of the photons become more zigzagged and the net propagation of energy becomes slower. This argument makes sense to me. Of course, to really decide whether this is the case would require a detailed modelling of the situation.

2. The problem and its solution in TGD Universe

What is then the problem?

  1. It is argued that the construction breaks momentum conservation. If microwave photons leak out, they should heat the cavity, and the energy and momentum would leak out as thermal radiation. Is it that this radiation is not observed, or is the heating theoretically so small that it cannot be observed? There is however the heating of the magnetron, which forces one to stop the experiment. As if the energy and momentum would go to the magnetron! Could this microwave energy be enough to achieve the heating of the magnetron? Microwaves are indeed used for heating, and they might be able to do this. But how could the leaking energy and momentum end up back in the magnetron?

  2. Recall that in the experiments of Russian physicists, in which a magnetic motor was claimed to spontaneously accelerate its rotational motion, a similar breakdown was the problem. A similar breakdown plagues also the Yildiz motor. I have proposed for both systems a TGD based model involving the magnetic body (MB) of the motor and dark photons and particles. What could cause this breakdown? Could it be that the energy and momentum that should have left the system are actually fed to the magnetron via its MB, consisting of flux tubes serving as channels?

2.1. Magnetron

To understand what might be involved consider what magnetron is.

  1. The magnetron produces the microwave radiation and serves obviously as the energy producer. The operation principle of the magnetron is as follows. One has two electrodes - the negatively charged cathode and the positively charged anode - at the opposite ends of a cavity (not the microwave cavity) with some length L. A constant electric field is generated between the electrodes. Electrons flow from the cathode to the anode in this electric field. One adds a magnetic field orthogonal to the plane of the motion of the electrons. This field forces the electron orbits to curve in the plane orthogonal to the magnetic field.

  2. There is a critical value of the magnetic field for which the electrons just reach the anode. For a stronger magnetic field they turn backwards before reaching the anode. The magnetron operates using this critical field. Note that the resonance condition defines a second criticality condition. The cyclotron photons created in the magnetron have a frequency corresponding to a resonance frequency f = nc/L (c=1 in the sequel) of the cavity, and standard quantum theory tells that their energy is given by E = hf. This is an incredibly small energy, and it is not at all clear whether photons with this energy can cause the heating of the magnetron.
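To get a feel for how small E = hf is, here is a minimal estimate of the single-photon energy at the fundamental resonance f = c/L; the cavity length L = 0.3 m is an assumed illustrative value, not a measured dimension of the device:

```python
# Single-photon energy at the fundamental cavity resonance f = c/L.
# NOTE: L_cav = 0.3 m is an assumed illustrative length.
h = 6.626e-34   # Planck constant, J*s
c = 3.0e8       # speed of light, m/s
eV = 1.602e-19  # J per eV

L_cav = 0.3
f = c / L_cav              # about 1 GHz
E_photon = h * f / eV      # single-photon energy in eV
print(f, E_photon)         # ~1e9 Hz, ~4.1e-6 eV
```

A micro-eV per photon is indeed tiny compared to the ~1 eV scale of molecular transitions.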

2.2 Notions of dark matter and magnetic body

Next the TGD view about dark matter is needed.

  1. Dark matter - including dark photons - has a non-standard value of Planck constant heff = n× h and is generated in the TGD Universe in quantum critical systems, which can appear in all scales. The process can be regarded as a quantum phase transition. One experimental motivation for the hierarchy of Planck constants was the strange quantal-looking effects of radiation in the EEG range (ELF) on the vertebrate brain. The explanation was in terms of dark heff = n× h cyclotron photons. Dark cyclotron photons have energies and therefore also momenta much larger than they would have ordinarily: E = h× f → n× h× f.

  2. These dark photons can transform to ordinary photons and vice versa, but particles with different values of heff do not appear in the same interaction vertices - hence the darkness for practical purposes. Biophotons would be an example of ordinary photons produced from dark photons in this phase-transition-like process.

  3. The associated notion is the magnetic body (MB), consisting of flux tubes and flux sheets and carrying these dark photons. The MB can be identified as an intentional agent in biosystems: it receives sensory input from the biological body as dark photon signals and controls it by dark photon signals.
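A sketch of the scaling E = h×f → n×h×f described in item 1: how large n would have to be for a dark photon to carry a visible-range energy. Here 2 eV is assumed as a representative visible-photon energy, and the two frequencies (ELF and microwave) are illustrative:

```python
# How large must n in heff = n*h be for E = n*h*f to reach the
# visible range?  A 2 eV target energy is assumed as representative.
h = 6.626e-34   # Planck constant, J*s
eV = 1.602e-19  # J per eV
E_target = 2.0 * eV

for f in (10.0, 1.0e9):    # ELF (EEG range) and a microwave frequency
    n = E_target / (h * f)
    print(f, n)            # ~4.8e13 for 10 Hz, ~4.8e5 for 1 GHz
```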

2.3. Is magnetron a quantum critical system generating dark cyclotron photons?

Could it be that the magnetron is a quantum critical system generating dark cyclotron photons with a large value of Planck constant?

  1. Could the criticality of the magnetron imply that part of the cyclotron photons created by the magnetron are actually dark and have much larger energies and momenta than ordinary photons? Could the MB of the magnetron be in contact with the second microwave cavity, and could the dark cyclotron photons leaking from the ends of the cavity end up at the MB and from the MB back to the magnetron and heat it?

  2. The system is claimed to not produce any visible exhaust products - that is, ordinary microwave photons. Could the leaking exhaust products be dark microwave photons - thus not visible - having very large energies?
    Could the dark photon exhaust products end up at the magnetron by the above mechanism? Here they would partially transform to ordinary high energy photons and heat the magnetron, inducing the failure of its operation.

  3. The magnetron produces high energy dark photons, maybe with energies in the visible range if the model for biosystems is taken as the starting point. One can argue that the description in terms of classical fields gives a realistic estimate for the total power irrespective of the value of heff. Thus the net power would not matter. Microwave photons have extremely tiny energies (for a 1 meter wavelength a fraction of about 10-6 of the energy of a 1 eV photon, which is just below the visible range). Dark photons transformed to, say, ordinary high energy photons with the energy of visible photons would interact with the condensed matter by inducing molecular transitions, and the heating effect could be much more effective than for ordinary microwave photons. Thus one would have the primary heating by the magnetron plus the heating caused by the dark photons from the microwave cavity.

  4. Any owner of a microwave oven can however argue that microwaves are very efficient heaters. Why would dark photons be needed? Now I cannot silence the heretic inside me. Do we really know what is happening inside our own microwave ovens? Could also this microwave heating involve dark photons with energies which correspond to molecular transition energies? Could this be the reason for the unreasonable effectiveness of microwave ovens? Microwave ovens involve also another strange phenomenon - small but visible ball lightnings. Could the visible and UV photons resulting from dark microwave photons heat the air to form a plasma producing the visible radiation? Microwave radiation can also induce "burning of water" involving a flame of visible light. I have proposed explanations for these phenomena, too, in terms of dark photons.

If the microwave energy and also the momentum return back to the magnetron as dark microwave photons, the magnetron would receive not only part of the energy but also part of the momentum opposite to that obtained by the system minus the magnetron. If all the momentum returns to the magnetron, the recoil momentum would not actually leave the system: the travel to the Moon might not succeed!
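The order-of-magnitude estimate of item 3 above can be verified directly:

```python
# Energy of a lambda = 1 m microwave photon as a fraction of 1 eV.
h = 6.626e-34   # Planck constant, J*s
c = 3.0e8       # speed of light, m/s
eV = 1.602e-19  # J per eV

lam = 1.0                      # wavelength in m
E_photon = h * c / lam / eV    # photon energy in eV
print(E_photon)                # ~1.24e-6, i.e. a fraction ~1e-6 of 1 eV
```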

3. TGD view about the standing waves in wave guide

It has been proposed that paired photons with the sum of the electromagnetic fields equal to zero in the microwave guide should make possible the leakage of the radiation (see this). I find it difficult to make sense of this argument. The article however inspired me to look at the situation using the TGD based view about em fields.

In the photon picture, photons would be reflected from the ends of the cavity and also from the walls if the cavity is a cone cut from its ends. In a reflection, energy is conserved, but a momentum which is twice the projection of the momentum in the orthogonal direction is lost. If the net losses occurring at the opposite ends are different, a thrust results, even if the Q value is vanishing. Only in the special case that the wave vectors are quantized does the net momentum current at the ends of the cavity vanish (discrete translational symmetry). These situations correspond classically to standing waves.
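The reflection kinematics described above can be sketched as follows (a generic elastic mirror reflection; the momentum vector and wall normal are illustrative):

```python
# Elastic reflection from a wall with unit normal nhat: the normal
# momentum component flips sign, the wall absorbs 2*(p . nhat)*nhat,
# and |p| (hence the photon energy) is unchanged.
import math

def reflect(p, nhat):
    dot = sum(pi * ni for pi, ni in zip(p, nhat))
    return tuple(pi - 2 * dot * ni for pi, ni in zip(p, nhat))

p = (3.0, 4.0, 0.0)            # illustrative incoming momentum
nhat = (1.0, 0.0, 0.0)         # wall normal
p_out = reflect(p, nhat)
transferred = tuple(a - b for a, b in zip(p, p_out))
norm = lambda v: math.sqrt(sum(x * x for x in v))
print(p_out)                   # (-3.0, 4.0, 0.0)
print(transferred)             # (6.0, 0.0, 0.0): twice the normal projection
print(norm(p) == norm(p_out))  # True: energy conserved
```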

In Maxwellian theory the em fields should correspond to standing waves with opposite wave vectors. In the TGD framework, standing waves are not possible at a single space-time sheet: Maxwellian linear superposition fails. The basic solutions are "massless extremals" (MEs) describing the propagation of arbitrary pulses in a single direction, left or right, with maximal signal velocity and preserving the pulse shape. Linear superposition makes sense for pulses travelling in the same direction. This represents precisely targeted communication.

How to obtain something analogous to standing waves in TGD?

  1. One can have two parallel space-time sheets in which the propagations occur in opposite directions. A test particle (a small 3-surface) touching both sheets experiences the sum of the forces created by the classical fields, and this corresponds to the force created by a standing wave. More generally, one can have set-theoretic unions of MEs, and these effectively represent a linear superposition of waves (actually only of their effects). This is the manner in which many-sheeted space-time gives rise to the space-time of the standard model and GRT.

  2. Suppose the cross section of the wave guide is constant. If only standing waves, that is pairs of MEs, are present, they can disappear from the wave guide only in pairs. The net value of the lost momentum vanishes for each lost ME pair, and it would seem that one cannot have an asymmetry in the case of a wave guide with constant cross section.

  3. If the members of the ME pairs have different wave vector components along the wave guide, the loss of an ME pair means a net momentum loss. Could the reflections of MEs at the ends and walls be such that the magnitude of the momentum component in the normal direction not only changes sign but is also reduced, so that also the energy of the photon is reduced? This could be the counterpart for the non-vanishing Q value.

    The first ME would correspond to a sum of the original pulse, the 2 times reflected pulse, the 4 times reflected pulse, etc. The second ME would correspond to a sum of the 2n+1 times reflected pulses, and the loss of an ME pair would mean a net loss of momentum, but it could go to the walls of the cavity.

  4. In the cylindrical geometry the condition that one has standing waves implies k = n×2π/L, so that the value of n would change in the reflection, which would be like a quantum transition. The lost 4-momentum would be Δp4 = (Δp, 2Δp) = ε(p, 2p), ε < 1, and tachyonic. This momentum could go to the wall of the microwave cavity as a whole. One can also imagine that only part of it is lost in this manner and that the momentum splits into a part p1 = ε(p, p) leaking out as a dark photon and a part p2 = ε(0, p) absorbed by the wall of the cavity. This contribution would correspond to the radiation pressure. Also more general momentum splittings are possible.

  5. Could the lost photon with 4-momentum ε(k, k) go to a magnetic flux tube of the magnetron as a dark photon? In the general case the light-like momentum ε(k, k) should be parallel to the flux tube, and the rest of the momentum difference Δp4 would go to the wall of the cavity. If the flux tube of the magnetic field of the magnetron is parallel to the wall of the cavity, this is not possible. If the flux tubes are parallel to the ends of the cavity, they should absorb the entire Δp4. This suggests that the flux tubes should be nearly orthogonal to either end of the wave guide.
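The kinematics of item 4 can be checked numerically (the values of ε and p are arbitrary illustrative numbers):

```python
# Numerical check of item 4: the lost 4-momentum eps*(p, 2p) is
# tachyonic, and splits exactly into a light-like part eps*(p, p)
# and a purely spatial part eps*(0, p).  eps and p are illustrative.
eps, p = 0.1, 1.0

E_lost, p_lost = eps * p, eps * 2 * p
m2 = E_lost**2 - p_lost**2     # invariant mass squared, signature (+,-)
print(m2)                      # -3*(eps*p)**2 < 0: tachyonic

p1 = (eps * p, eps * p)        # light-like: E^2 - p^2 = 0
p2 = (0.0, eps * p)
total = (p1[0] + p2[0], p1[1] + p2[1])
print(total == (E_lost, p_lost))   # True: the split is exact
```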

Armed with this picture one can try to answer the question whether one can obtain a net acceleration lasting the desired time.
  1. Whether one can obtain a net momentum transfer to the MB of the system depends both on the shape of the cavity and on the direction distribution of the flux tubes and their density at the surfaces orthogonal to the average magnetic field. This density is proportional to the average magnetic field. The magnetic field of the magnetron is a dipole field in the first approximation, and the flux tubes form closed loops.

    A good position for the wave guide is such that the magnetic field lines meet the second end of the wave guide nearly orthogonally. The magnetron could be to the left or to the right of the wave guide, maybe nearer to the end with the larger area so as to maximize the number of flux tubes meeting the end. One would obtain dark photons at the magnetic flux tubes leading to the magnetron and - if nothing else - at least an explanation for why the magnetron heats up so fast!

  2. Can one obtain a net momentum transfer to the flux tubes themselves? This depends on the same geometric factors: the shape of the cavity and the direction distribution and density of the flux tubes at the surfaces orthogonal to the average magnetic field.

  3. Is it really possible to obtain an accelerated motion in a long time scale? The system plus its MB does not accelerate unless the MB is able to transfer its momentum somewhere, say to a larger MB. This probably poses limits on the distance which the system can move, since one naively expects that the system and its MB tend to move in opposite directions, so that the MB would stretch. One expects that the MB can store only a limited amount of momentum, say in a Bose-Einstein condensate of dark photons.

    The momentum transfer (as dark photons) to a larger MB would require reconnections with it. Reconnection is a standard mechanism in TGD based quantum biology, relying strongly on the dynamics ("motor actions") of the MB (braiding making possible topological quantum computation, reconnection making possible dynamical flux tube networks, heff-changing phase transitions changing the length and the thickness of the flux tubes as scales proportional to heff, ...).

See the article Could the "impossible" EM drive be possible in TGD Universe?. For background see the chapter Summary of TGD Inspired Ideas about Free Energy.
