The generators of a "Poincare-type" group in momentum space

  • #1
redtree
TL;DR Summary
The mathematics of the Poincare group in position space X is well described. However, I have not found an analogous description of the generators of a "Poincare-type" group in momentum space K, where the boosts, rotations and translations involve ##k^\mu## (as opposed to ##x^\mu## in position space).
Can someone share a paper or chapter from a textbook if they know a good one?

I'm curious to see the explicit form of these matrices. In position space, the generators of boosts act on the rapidity, which can be related to velocity in X. Assuming the generators of boosts in K act on a rapidity in K, what is the analogue of velocity in K to which that rapidity is related?
 
  • #2
redtree said:
TL;DR Summary: The mathematics of the Poincare group in position space X is well described. However, I have not found an analogous description of the generators of a "Poincare-type" group in momentum space K, where the boosts, rotations and translations involve ##k^\mu## (as opposed to ##x^\mu## in position space).

I'm curious to see the explicit form of these matrices.
In position space, one typically uses differential operators to represent the Poincare generators. E.g., ##\partial_x## generates translations in the "x" direction. So I'm not sure why you want to use matrices (even though of course one can use 5x5 matrices for that purpose).

And what, precisely, do you mean by "Poincare-type" group? Do you mean simply representing the usual generators in a Fourier-transformed way, or do you mean (e.g.) to include translations in momentum space (changing the momentum)? If the former, ##\partial_x## goes over to ##k_x## (multiplied by some constant depending on your FT conventions).
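
If the former is what you mean, here is a minimal sympy sketch (my own illustration, assuming the ##e^{-2\pi i k x}## transform convention and a Gaussian test function) showing that differentiation in X space becomes multiplication by ##2\pi i k## in K space:

Python:
import sympy as sym

x, k = sym.symbols('x k', real=True)
f = sym.exp(-x**2)  # any sufficiently nice test function; a Gaussian is used here

# Fourier transform of df/dx versus (2*pi*i*k) times the Fourier transform of f,
# using sympy's convention F(k) = Integral(f(x) * exp(-2*pi*I*k*x), x)
lhs = sym.fourier_transform(sym.diff(f, x), x, k)
rhs = 2*sym.pi*sym.I*k*sym.fourier_transform(f, x, k)

print(sym.simplify(lhs - rhs))  # 0: differentiation in X corresponds to multiplication by k in K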

redtree said:
In position space, the generators of boosts act on the rapidity, which can be related to velocity in X. Assuming the generators of boosts in K act on rapidity in K, what is the velocity in K related to the rapidity in K?
Velocity and rapidity are related by a tanh function. Wikipedia covers this.
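
If you want explicit matrices, here is a minimal numerical sketch (my own, using one common sign convention for the boost generator acting on ##(t,x,y,z)## 4-vectors): exponentiating the boost generator by a rapidity ##\phi## gives the usual ##\cosh/\sinh## boost matrix, and the velocity is ##v = \tanh\phi##.

Python:
import numpy as np
from scipy.linalg import expm

phi = 0.7                     # an arbitrary rapidity, just for illustration

# boost generator along x in the 4-vector representation (t, x, y, z)
Kx = np.zeros((4, 4))
Kx[0, 1] = Kx[1, 0] = 1.0

boost = expm(phi * Kx)        # finite boost = exp(rapidity * generator)

print(np.isclose(boost[0, 0], np.cosh(phi)))                # True: gamma = cosh(phi)
print(np.isclose(boost[0, 1], np.sinh(phi)))                # True: gamma*v = sinh(phi)
print(np.isclose(boost[0, 1] / boost[0, 0], np.tanh(phi)))  # True: v = tanh(phi)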
 
  • #3
My reference is: Schwichtenberg, Jakob. Physics from symmetry. Springer, 2018.

In this text, matrices are utilized for the generators of the Poincare group and these matrices are 4x4 or 2x2.

In any case, the generators, particularly the boosts, are related to complex rotations over an angle ##\theta##, where ##\theta## is often called the "rapidity" (though not in this particular text). Rapidity in spacetime, i.e., X-space, can then be related to relative velocity. The Poincare group denotes the spacetime, i.e., X-space, isometries for translations, boosts and rotations.

The derivation of these generators with their Lie algebra is relatively simple. Importantly, the mathematics is not specific to spacetime: an analogous derivation, yielding generators that preserve the corresponding isometries, can be carried out for any space, such as a momentum space, i.e., K-space, or a generic space Z. The Poincare group specifically denotes these group actions in X-space. An analogous group structure in some other space is not a Poincare group, though it might be called a Poincare-type group in that other space. Both the Poincare group and analogous Poincare-type groups in other spaces can be utilized to represent the symmetries of basis transformations in the space.

Given an analogous derivation of a Poincare-type group structure in K-space, the generators of "boosts" in K can be related to complex rotations in K over an angle, where this angle can be called a "rapidity" in K, which in turn can be related to a velocity in K. Again, all of this is relatively straightforward mathematically.
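
To make the point concrete, here is a minimal numerical sketch (my own, with an arbitrary four-vector and one choice of metric signature) showing that the same generator matrices can act on ##k^\mu## and preserve the analogous invariant ##k_\mu k^\mu##, exactly as they do for ##x^\mu##:

Python:
import numpy as np
from scipy.linalg import expm

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski-type metric, signature (+,-,-,-)

Kx = np.zeros((4, 4))
Kx[0, 1] = Kx[1, 0] = 1.0                # "boost" generator along k_x, the same matrix as in X-space

L = expm(0.3 * Kx)                       # a finite "boost" acting on k^mu

k = np.array([2.0, 0.4, -0.1, 0.7])      # an arbitrary four-vector in K-space
kp = L @ k                               # "boosted" four-vector

# the invariant k0^2 - |k_vec|^2 is preserved, just as x0^2 - |x_vec|^2 is in X-space
print(np.isclose(k @ eta @ k, kp @ eta @ kp))   # True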

Just as the Poincare group provides important mathematical structure to physics, it would seem that a "Poincare-type" group in K (or momentum space) might also provide insights, and I wonder if such a group structure has been described explicitly (and if so, where) and if such a structure has been utilized in ways that are analogous to the Poincare group in formulating physical theories.
 
  • #4
redtree said:
My reference is: Schwichtenberg, Jakob. Physics from symmetry. Springer, 2018.
In this text, matrices are utilized for the generators of the Poincare group and these matrices are 4x4 or 2x2.
Do you really mean the Poincare group (10 parameters describing rotations, boosts and translations) or are you interested in the Lorentz group (just the 6 parameters of rotations and boosts)?
Assuming the latter, you might consult the 2002 article:
The generators of Lorentz transformation in momentum space (Pengfei Zhang & Tunan Ruan)
Abstract: In the momentum space, the angular momentum operator and the boost vector operator, i.e. the generators for the Lorentz transformation of a particle with arbitrary spin and nonzero mass are discussed. Some new expressions are obtained in terms of the orbital and spin parts.
https://link.springer.com/article/10.1360/02ys9025
 
  • #5
What I really want is the Poincare-type group in K-space, which of course is trivial to construct from the Lorentz group in K-space.

The article you quote is interesting but suffers from the following problem:

Given a 4-momentum ##\textbf{p} \in K##:
$$\textbf{p} = \{E_0,p_1,p_2,p_3 \} = \{E_0, \vec{p} \}$$
where ##\vec{p} =\{ p_1,p_2,p_3\}##

The paper utilizes the following equivalence
$$\textbf{p} = \{\gamma_X M, \gamma_X \vec{v}_X M\}$$
where ##M## denotes mass, ##\vec{v}_X## denotes velocity in X-space, such that ##\vec{v}_X \in X \not\in K## and ##\gamma_X = (1 -\vec{v}_X^2)^{-1/2}##, such that ##\gamma_X \in X \not \in K##

From a mathematical perspective, this equivalence implies a mapping ##m: K \rightarrow X##, where
$$m(\textbf{p}): \{E_0, \vec{p} \} \rightarrow \{\gamma_X M, \gamma_X \vec{v}_X M\}$$
which for ##\vec{v}_X = 0## implies
$$m(\textbf{p}) \rightarrow \{M, \vec{0} \}$$

The problem should be clear in this context: under the mapping ##m: K \rightarrow X##, this formulation of momentum, though still a formulation of momentum, is no longer in K space but rather in X space.

I am hoping to see formulations of Lorentz-type and Poincare-type groups in K and not in X, so this paper is not what I am looking for. I am specifically looking for formulations that do NOT utilize the mapping ##m##.
 
  • #6
redtree said:
What I really want is the Poincare-type group in K-space, which of course is trivial to construct from the Lorentz group in K-space.
If it's trivial to construct, then why...

redtree said:
I am hoping to see a formulation of a Lorentz-type and Poincare-type groups in K and not in X and thus this paper is not what I am hoping to find. I am specifically looking for formulations that do NOT utilize the mapping ##m##.
...are you even asking this? Don't you already have the answer?

Basically you appear to me to be considering "K-space" as a Minkowski vector space in its own right, independent of any connection with anything else. If that's the case, the Minkowski vector space structure already gives you everything you need to know. What more do you want?
 
  • #7
I am considering K space as the Fourier conjugate space of X space, not a completely independent space, and I am requiring that dynamics in K space respect that Fourier conjugate relationship between X and K spaces.

My opinion is that this exercise is both trivial and useful, and I want to see if it has been done.

Ultimately, the reason for doing this is twofold: 1) Because Lorentz-type and Poincare-type groups in K would seem to be important symmetry groups in K space when considering basis transformations in K; 2) These symmetry groups can then be used to consider Lagrangian densities formulated in K, which of course is its own topic.
 
  • #8
redtree said:
I am considering K space as the Fourier conjugate space of X space
Then I don't understand your objections in post #5, since the mapping you object to is part of the Fourier conjugate relationship.
 
  • #9
How is the mapping ##m## part of the Fourier conjugate relationship? It does not respect the Fourier conjugate relationship between the spaces X and K. This is trivial to demonstrate.

Assuming ##\hbar =1 \rightarrow \textbf{p} = \textbf{k} = \{ k_0, \vec{k} \}##, where ##\vec{k} = \vec{p}##.

For an integral transform ##\mathscr{F}: X \rightarrow K##, i.e., the Fourier transform
$$f(x) \rightarrow \hat{f}(k): \hat{f}(k) = \int_{\mathbb{R}} f(x) e^{-2 \pi i k x} dx$$
with an inverse mapping ##\mathscr{F}^{-1}: K \rightarrow X##, i.e., the inverse Fourier transform
$$\hat{f}(k) \rightarrow f(x): f(x) = \int_{\mathbb{R}} \hat{f}(k) e^{+2 \pi i k x} dk$$

For ##\hat{f}(k) = \vec{k}##, with the mapping ##\mathscr{F}^{-1}: K \rightarrow X##
$$\mathscr{F}^{-1} \left[ \vec{k} \right] = \frac{\delta'(\vec{x})}{2 \pi i}$$
while for ##\vec{k}##, with the mapping ##m: K \rightarrow X##
$$m\left[\vec{k} \right] = M \gamma_X \vec{v}_X$$

These are independent mappings.
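
As a cross-check of the distributional identity above, here is a minimal sympy sketch (my own, pairing both sides against an arbitrary odd test function ##g(x) = x e^{-x^2}##, with the same ##e^{+2\pi i k x}## convention):

Python:
import sympy as sym

x, k = sym.symbols('x k', real=True)
g = x*sym.exp(-x**2)   # a test function, chosen odd so the check is non-trivial

# inverse Fourier transform of g, with the e^{+2*pi*I*k*x} kernel used above
G = sym.integrate(g*sym.exp(2*sym.pi*sym.I*k*x), (x, -sym.oo, sym.oo))

lhs = sym.integrate(k*G, (k, -sym.oo, sym.oo))         # action of F^{-1}[k] on the test function g
rhs = -sym.diff(g, x).subs(x, 0)/(2*sym.pi*sym.I)      # action of delta'(x)/(2*pi*i) on g: -g'(0)/(2*pi*i)

print(sym.simplify(lhs - rhs))   # 0: the two sides agree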
 
  • #10
redtree said:
How is the mapping ##m## part of the Fourier conjugate relationship?
First of all, your Fourier transform in post #9 is wrong. The vector you are calling ##\mathbf{p}## in what you are calling K space (P space would be a better term) is, in Fourier transform terms, a delta function in K space--because it is a vector representing a single 4-momentum, i.e., a single point in K space. The Fourier transform of that is not a delta function in X space. It's the function ##e^{ipx}##.

Second, you didn't include the norm of ##\mathbf{p}## in your analysis. That norm is ##M##. So there needs to be a factor of ##M## included.

Third, what is the thing in X space that you are calling ##m(\mathbf{p})##? It's not a vector; vectors in X space represent spacetime positions, i.e., events, i.e., points in X space--in Fourier transform terms, delta functions. And as above, ##\mathbf{p}## is a delta function in K space, whose Fourier transform in X space is ##e^{ipx}## (with a factor of ##M## put in in the appropriate place). What does this represent in X space? It represents a plane wave. And what is the ordinary 3-velocity of that wave in X space? It is what you are calling ##\vec{v}##. In other words, the thing in X space you are calling ##m(\mathbf{p})## is the Fourier transform of ##\mathbf{p}## to X space.
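
To spell out the first point, here is a minimal sympy sketch (my own, using the ##e^{+2\pi i k x}## kernel from earlier in the thread, with ##k_0## standing for the single definite momentum): the delta function in K space transforms to a plane wave in X space, not to another delta function.

Python:
import sympy as sym

x, k, k0 = sym.symbols('x k k_0', real=True)

# a single definite momentum k0 is a delta function in K space;
# transforming it back to X space with the e^{+2*pi*I*k*x} kernel gives a plane wave
plane_wave = sym.integrate(sym.DiracDelta(k - k0)*sym.exp(2*sym.pi*sym.I*k*x),
                           (k, -sym.oo, sym.oo))

print(plane_wave)   # exp(2*I*pi*k_0*x): a plane wave in X space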
 
  • #11
PeterDonis said:
First of all, your Fourier transform in post #9 is wrong. The vector you are calling ##\mathbf{p}## in what you are calling K space (P space would be a better term) is, in Fourier transform terms, a delta function in K space--because it is a vector representing a single 4-momentum, i.e., a single point in K space. The Fourier transform of that is not a delta function in X space. It's the function ##e^{ipx}##.
For ##\hbar =1##, the choice of K space or P space is a matter of convention. I also choose to use the Fourier transform convention ##e^{2 \pi i k x}## instead of ##e^{i p x}##. I tried to be explicit about all this.

I apologize, but I don't know what you mean in saying my calculation is incorrect. This is not my calculation; I did it in Mathematica.

$$f(x) = \int_{\mathbb{R}} k e^{2 \pi i k x} dk = \frac{\delta'(x)}{2 \pi i}$$
where ##\delta'(x)## denotes the derivative of the delta function with respect to ##x##

This is the Mathematica code if you want to check:
FourierTransform[k, k, x, FourierParameters -> {0, +2*Pi}]

As to your second point, I was never calculating the norm of ##\textbf{p}##. I apologize again but don't see how the norm has relevance to the discussion.

I defined ##m## as a mapping and ##M## as mass. I was merely using a notation convention where one writes the mapping as a function acting on the variable, similar to ##f(x)## for ##f: X \rightarrow X##. If that notation makes it harder to understand my mathematical point then ignore it.

I would ask you to explain your point that momentum formulated in X space is not a vector. Are you saying that momentum formulated as a function of velocity in X is not in X space just because it does not represent a 4-position? What space is ##d \vec{x}/d x_0## in then, if not X space?
 
  • #12
redtree said:
I don't know what you mean in saying my calculation is incorrect.
In momentum space, which is what you are calling K space, the object you are calling ##\mathbf{p}## is a point. The vector ##\mathbf{p}## is the vector from the origin to that point. But if you are going to Fourier transform it, then what you Fourier transform is a delta function in K space; that is what represents a specific value of the 4-momentum ##\mathbf{p}##.

redtree said:
This is not my calculation, I did it in Mathematica.
You don't need Mathematica to Fourier transform a delta function, or a complex exponential for that matter. You can do it by inspection. But you need to correctly understand what you are transforming.

redtree said:
As to your second point, I was never calculating the norm of ##\textbf{p}##.
You didn't calculate it, you just assumed it when you introduced the quantity ##M## into your equations. That's what ##M## is: the norm of ##\mathbf{p}##.

redtree said:
don't see how the norm has relevance to the discussion.
You could assume it to be ##1## if you like, just as you are assuming ##\hbar = 1##. But you didn't do that. That means you need to include it in your equations. I agree it's a minor point, though. The major issue is correctly understanding what you are transforming.

redtree said:
I defined ##m## as a mapping and ##M## as mass.
Yes, I know that. There is no confusion there and that choice of notation is not a problem.

redtree said:
I would ask you to explain your point that momentum formulated in X space is not a vector.
You can model 4-momentum as a tangent vector in X space if you want; then it's just a directional derivative along a particular curve in X space. But if you are going to Fourier transform between X space and K space, or vice versa, then that's not how you want to formulate momentum in X space.

As far as a Fourier transform is concerned, momentum in X space is the function ##e^{ipx}##, since that's the Fourier transform of the delta function ##\delta(p)## in K space. Or, if you want to be precise about exactly what kinds of products you are taking, it would be ##\delta(\mathbf{p})## in K space (where ##\mathbf{p}## tells you the particular point in K space that denotes the 4-momentum you are interested in), and its Fourier transform in X space would be ##e^{i \mathbf{p} \cdot \mathbf{x}}##, considered as a function over ##\mathbf{x}##, i.e., over the points in X space, with ##\mathbf{p} \cdot \mathbf{x}## being a dot product.
 
  • #13
redtree said:
What space is ##d \vec{x}/d x_0## in then, if not X space?
In the Euclidean 3-space that represents a spacelike 3-surface in X space, picked out by a constant value of the time coordinate ##t## in a chosen inertial frame. X space is a space of 4-vectors, not 3-vectors.

The 4-momentum in X space, considered as a directional derivative along some timelike curve, is ##M\ d\mathbf{x} / d\tau##, where ##\tau## is the affine parameter along the curve.
 
  • #14
Thank you for your response.

Why do you assume that ##\textbf{p} \in K## is a point relative to the origin, let alone one represented by a delta function? I have made neither of those assumptions. A vector may or may not include the origin as one of its endpoints. If I wanted to represent a delta function, I would have written ##\delta(\textbf{p})## or ##\delta(\vec{p})##.

The idea that momentum in momentum space must be represented by a delta function seems highly problematic. For example, it is inconsistent with the derivation of the uncertainty principle. (see https://en.wikipedia.org/wiki/Uncertainty_principle)

My point was to demonstrate that ##\mathscr{F}^{-1}## and ##m## are independent transformations by demonstrating how they act differently on the same mathematical object. That one is an integral transformation and the other a scaling transformation, where ##\hbar, M, \gamma_X## are all scalars, should really be enough to prove the point, but I wanted to demonstrate the mathematical fact explicitly.

The larger point is that I am looking for a published formulation of a Poincare-type group in K (but a Lorentz-type group in K would suffice) formulated without utilizing the scaling transformation represented by ##m##. If you know of any published sources, please let me know.
 
  • #15
redtree said:
Why do you assume that ##\textbf{p} \in K## is a point relative to the origin
It's not an assumption, it's what a vector in K space represents mathematically. Physically, it represents a state with a single definite 4-momentum.

redtree said:
let alone one represented by a delta function?
For purposes of doing a Fourier transform, a state with a single definite 4-momentum, in 4-momentum space, i.e., K space, is a delta function. That's how you describe a single point in K space when you want to Fourier transform.

redtree said:
I have made neither of those assumptions.
As above, they aren't assumptions, they're just correctly recognizing what you need to recognize in order to do correctly what you say you are trying to do.

redtree said:
A vector may or may not include the origin as one of its endpoints.
A vector in a vector space, if you insist on thinking of it as a little arrow with two endpoints, must have the origin as one of its endpoints.

The real fix for this issue is to stop thinking of a vector in a vector space as a little arrow with two endpoints. A vector space is a much more general concept, and even in special cases where you can sort of get away with the "arrow" metaphor, it can still cause confusion, as it is doing for you in this case.

redtree said:
The idea that momentum in momentum space must be represented by a delta function seems highly problematic. For example, it is inconsistent with the derivation of the uncertainty principle. (see https://en.wikipedia.org/wiki/Uncertainty_principle)
Um, what? Even leaving aside that Wikipedia is not a good primary source, the article you reference says:

"Mathematically, in wave mechanics, the uncertainty relation between position and momentum arises because the expressions of the wavefunction in the two corresponding orthonormal bases in Hilbert space are Fourier transforms of one another (i.e., position and momentum are conjugate variables)."

In particular, an eigenstate of momentum is a delta function in momentum space, and the Fourier transform of that, i.e., a complex exponential, in position space. Some sources get uneasy about this (and the Wikipedia article appears to share such uneasiness) because neither a delta function nor a complex exponential are, strictly speaking, in the Hilbert space of square integrable functions, but that is easily dealt with, for example by using the rigged Hilbert space formalism, as discussed in, e.g., Ballentine.

redtree said:
My point was to demonstrate that ##\mathscr{F}^{-1}## and ##m## are independent transformations by demonstrating how they act differently on the same mathematical object.
And your claimed demonstration is wrong, for reasons I have already given.

redtree said:
The larger point is that I am looking for a published formulation of a Poincare-type group in K (but a Lorentz-type group in K would suffice) formulated without utilizing the scaling transformation represented by ##m##.
And because of the various issues that I and others have pointed out, we still don't understand what you are actually looking for, or even whether what you are looking for makes any sense.
 
  • #16
I once found A. Cohen, An Introduction to Lie Theory of One-Parameter Groups, Baltimore 1911, by chance on the Internet. I wanted to read and understand the essential parts of it, partly because I'm interested in the history of science, and partly because I observed that physics hasn't really performed a transition of language, at least not fully. Many terms are still the same as they were when Cohen wrote that book.

I summarized the concepts in an insights article When Lie Groups Became Physics. The considered groups are far smaller than the ten-dimensional Poincaré group, however, the principles are comparable. So maybe, this helps.
 
  • #17
fresh_42 said:
the ten-dimensional Poincaré group
Mathematically, this is a single group, but physically, it can have (at least) two distinct interpretations. We can think of it as the group of Killing vector fields on Minkowski spacetime, i.e., as the symmetry group of what the OP calls "X space". Or we can think of it as the symmetry group of what the OP calls "K space", and which is usually called "momentum space" in the literature (for example, QFT is most often done in momentum space since energy-momentum is typically what is measured in particle physics experiments). The latter interpretation appears to be what the OP is asking about.
 
  • #18
From Auletta, Gennaro, Mauro Fortunato, and Giorgio Parisi. Quantum mechanics. Cambridge University Press, 2009. See section 2.2.5 Momentum representation, equation 2.148:

$$\hat{p} \tilde{\psi}(p) = p \tilde{\psi}(p)$$

(The same formulation can be found in Fitzpatrick, Richard. Quantum mechanics. World Scientific Publishing Company, 2015, p. 49.)

Similarly, from equation 2.124 in Auletta

$$\hat{x} \psi(x) = x \psi(x)$$

In line with this formulation, I am not assuming that momentum in momentum space, in this case ##p \in P##, is represented by a delta function.

With regards to the uncertainty principle: The uncertainty principle fundamentally describes the variance relationship between Fourier conjugate variables (see https://www.math.stonybrook.edu/~bishop/classes/math533.S21/Notes/Folland_uncertainty.pdf). Thus, for every observation of ##\textbf{k} \in K## and every observation of ##\textbf{x} \in X##, there is a variance associated with that observation represented by ##\sigma_k^2## for ##\textbf{k}## and ##\sigma_x^2## for ##\textbf{x}##, where ##\textbf{k}## and ##\textbf{x}## are vectors, such that

$$\left( \int_{\mathbb{R}^4} \textbf{k}^2 |\tilde{\psi}(\textbf{k})|^2 d\textbf{k} \right) \left( \int_{\mathbb{R}^4} \textbf{x}^2 |\psi(\textbf{x})|^2 d\textbf{x} \right)= \sigma_k^2 \sigma_x^2 \geq \frac{1}{16 \pi^2}$$

The product of variances of Fourier conjugate variables is minimized when ##|\tilde{\psi}(\textbf{k})|^2## and ##|\psi(\textbf{x})|^2## are Gaussian PDFs.

The delta distribution is the zero-variance limit of the Gaussian PDF ##(2 \pi \sigma_K^2)^{-1/2}\text{exp}\left( \frac{-\textbf{k}^2}{2 \sigma_K^2}\right)## (see Crooks, Gavin E. "Field guide to continuous probability distributions." Berkeley Institute for Theoretical Science: Berkeley, CA, USA (2019).):

$$\delta(\textbf{k}) = \lim_{\sigma_K^2 \rightarrow 0} (2 \pi \sigma_K^2)^{-1/2}\text{exp}\left( \frac{-\textbf{k}^2}{2 \sigma_K^2}\right)$$
where, according to the variance relationship between Fourier conjugate variables
$$\sigma_K^2 \rightarrow 0 \implies \sigma_X^2 \rightarrow \infty $$
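
For what it's worth, here is a minimal sympy sketch (my own, using a normalized one-dimensional Gaussian wave packet with position-space variance ##\sigma_X^2## and the same ##e^{-2\pi i k x}## convention) confirming that the Gaussian saturates the bound ##\sigma_x^2 \sigma_k^2 = 1/(16\pi^2)## quoted above:

Python:
import sympy as sym

x, k = sym.symbols('x k', real=True)
s = sym.symbols('sigma_X', positive=True)    # position-space standard deviation

# normalized Gaussian wavefunction with Var[x] = sigma_X^2 (real and positive, so |psi|^2 = psi^2)
psi = (2*sym.pi*s**2)**sym.Rational(-1, 4) * sym.exp(-x**2/(4*s**2))

# its Fourier transform with the e^{-2*pi*I*k*x} kernel (also real and positive here)
psi_hat = sym.fourier_transform(psi, x, k)

var_x = sym.integrate(x**2 * psi**2,     (x, -sym.oo, sym.oo))
var_k = sym.integrate(k**2 * psi_hat**2, (k, -sym.oo, sym.oo))

print(sym.simplify(var_x * var_k))   # 1/(16*pi**2): the Gaussian saturates the bound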
 
  • #19
Do you feel you've now answered the questions in your original post?
 
  • #20
redtree said:
From Auletta, Gennaro, Mauro Fortunato, and Giorgio Parisi. Quantum mechanics. Cambridge University Press, 2009. See section 2.2.5 Momentum representation, equation 2.148:

$$\hat{p} \tilde{\psi}(p) = p \tilde{\psi}(p)$$

(The same formulation can be found in Fitzpatrick, Richard. Quantum mechanics. World Scientific Publishing Company, 2015, p. 49.)

Similarly, from equation 2.124 in Auletta

$$\hat{x} \psi(x) = x \psi(x)$$

In line with this formulation, I am not assuming that momentum in momentum space, in this case ##p \in P##, is represented by a delta function.
It is not that "momentum in momentum space" in general is represented by a delta function. Eigenstates of momentum, i.e., states that represent one single momentum, are delta functions. In the position representation, eigenstates of position, i.e., eigenstates of ##\hat{x}## for which equation 2.124 in your reference applies, are delta functions ##\delta(x)##. Similarly, in the momentum representation, eigenstates of momentum ##\hat{p}## for which equation 2.148 in your reference applies, are delta functions ##\delta(p)##.

The state you are calling a 4-momentum ##\mathbf{p}## is an eigenstate of momentum: it represents one single momentum. So in momentum space it's a delta function, per the above.

redtree said:
according to the variance relationship between Fourier conjugate variables

$$\sigma_K^2 \rightarrow 0 \implies \sigma_X^2 \rightarrow \infty $$
Yes, and what state in K space corresponds to the limit ##\sigma_K^2 \to 0##? A delta function ##\delta(p)##.
 
  • #21
In general, statistical theories do not formulate observations or measurements as delta functions. Why such a formulation is necessary or even proper in quantum theory is unclear to me.

First of all, the assumption that any particular observation (or eigenstate) should be represented by a delta function implies perfectly accurate detectors. One might also consider detectors as imperfect, where an observation denotes a measurement with some confidence interval (or accuracy), such that instead of fully collapsing the probability density function to a point, an observation decreases the variance to some smaller but still non-zero value centered around the measured value.

Secondly, even if one assumes perfectly accurate detectors, it is the probability density function that is collapsing to a delta function, not the momentum vector in K (in this case), where for a Gaussian PDF
$$\delta(\textbf{k}) = \lim_{\sigma_K^2 \rightarrow 0} (2 \pi \sigma_K^2)^{-1/2}\text{exp}\left( \frac{-\textbf{k}^2}{2 \sigma_K^2}\right)$$

In other words, assuming perfectly accurate detectors, the statement that the momentum in K and the position in X are themselves represented by delta functions is not accurate. It is the PDF associated with the momentum in K and the PDF associated with the position in X that are represented by delta functions.

Thus, for a perfectly accurate detector, measurement implies the following transition:
$$\textbf{k} |\hat{\psi}(\textbf{k}) |^2\implies \textbf{k} \delta(\textbf{k} - \textbf{k}_{measured})$$
where ##\textbf{k}_{measured}## represents the measured value of ##\textbf{k}## by the perfectly accurate detector

For an imperfect detector, measurement implies the following transition in the PDF:
$$\textbf{k} |\hat{\psi}(\textbf{k}) |^2\implies \textbf{k} |\hat{\psi}'(\textbf{k}) |^2$$
or more succinctly
$$\hat{\psi}(\textbf{k}) \implies \hat{\psi}'(\textbf{k})$$
with ##\sigma_K^2 \implies \sigma_K'^2##
 
  • #22
redtree said:
In general, statistical theories do not formulate observations or measurements as delta functions. Why such a formulation is necessary or even proper in quantum theory is unclear to me.
You're the one who brought up Fourier transforms, which, as I've already said, is the context in which the delta function representations (not to mention the complex exponential representations) are relevant.

redtree said:
the assumption that any particular observation (or eigenstate) should be represented by a delta function implies perfectly accurate detectors
You're the one who brought up such states in the first place. If you want to work with wave packets instead, fine, work with them. (But that raises the question of why this thread is in the relativity forum instead of the QM forum--see further comments below.) But you aren't doing that. You keep talking about an object you call ##\mathbf{p}## that represents a single definite momentum. That means an eigenstate of momentum, which if you want to do Fourier transforms, is a delta function in momentum space, and indeed it implies a perfectly accurate measurement, which is of course not physically realizable. But that hasn't prevented physicists from using such mathematical objects when it makes sense to do so. You just have to use them correctly.

redtree said:
even if one assumes perfectly accurate detectors, it is the probability density function that is collapsing to a delta function, not the momentum vector in K
If you're doing quantum mechanics, the two are the same; what you are calling "the momentum vector in K" is just a different (and inappropriate for the purpose you say you want to use it for) representation of the probability density function.

But this is the relativity forum; we're not doing QM here. There are no "probability density functions". Or perhaps you actually are intending to do QM and just haven't realized it, since you brought up Fourier transforms between position and momentum space, which is something that usually arises in a QM context. But in any case, as I've already said, if that's what you're doing, and you're talking about a single definite momentum, then you're talking about a delta function in momentum space, whether you like it or not.
 
  • #23
The larger goal is to consider the symmetries of Lagrangian densities in both X and K. Certainly, that does include elements of QFT, but it also includes the symmetries of basis transformations in both X and K, which is why I raised the question in this forum. My goal is to see if formulations of Poincare-type groups in K exist which are mathematically consistent with the Fourier conjugate relationship between the spaces X and K.

To your point, in statistics, the PDF ##|\hat{\psi}(\textbf{k})|^2## and the observable ##\textbf{k}## are not the same thing. You seem to be conflating the two. For example,
$$\langle \textbf{k} \rangle = \int_{\mathbb{R}^n} \textbf{k} |\hat{\psi}(\textbf{k})|^2 d\textbf{k}$$
where ##\langle \textbf{k} \rangle## denotes the expected value

By your approach, the expected value of a measurement would be as follows, where both the observable and the PDF together are represented by a delta distribution, such that:
$$\langle \textbf{k} \rangle = \int_{\mathbb{R}^n} \delta(\textbf{k}-\textbf{k}_{measured}) d\textbf{k} = 1 $$
which is not correct

However, by the standard statistical approach, which does not conflate the observable and the PDF, we achieve the proper result.
$$\langle \textbf{k} \rangle = \int_{\mathbb{R}^n} \textbf{k} \delta(\textbf{k}-\textbf{k}_{measured}) d\textbf{k}= \textbf{k}_{measured} $$
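
Here is a minimal sympy sketch (my own, in one dimension, with ##k_m## standing in for the measured value) of the two integrals above, illustrating the distinction between the PDF and the observable:

Python:
import sympy as sym

k = sym.symbols('k', real=True)
km = sym.symbols('k_m', real=True)   # the measured value

pdf = sym.DiracDelta(k - km)         # PDF after a perfectly accurate measurement

# integrating the PDF alone just gives its normalization, not an expected value
print(sym.integrate(pdf, (k, -sym.oo, sym.oo)))       # 1

# the expected value weights the observable k by the PDF
print(sym.integrate(k*pdf, (k, -sym.oo, sym.oo)))     # k_m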
 
  • #24
redtree said:
My goal is to see if formulations of Poincare-type groups in K exist which are mathematically consistent with the Fourier conjugate relationship between the spaces X and K.
But the responses you have been getting are saying that you already have one staring you in the face and you're rejecting it on grounds that make no sense.

redtree said:
in statistics, the PDF ##|\hat{\psi}(\textbf{k})|^2## and the observable ##\textbf{k}## are not the same thing.
##\textbf{k}## is not an observable, by your own logic, since it would require infinite measurement accuracy.

redtree said:
You seem to be conflating the two.
No. I'm not claiming that the delta function ##\delta(\textbf{k})## is a PDF.

redtree said:
By your approach, the expected value of a measurement...
No. I have made no such claim.
 
  • #25
Though I may disagree with the assumption of infinite measurement accuracy, the standard statistical approach that I utilize is fully capable of modeling such an assumption.

In statistics, there is more to a measurement than the measured value. There is also the variance. Based on the variance and the measured value together, we can calculate the confidence interval, where the confidence interval describes how confident we can be as to the accuracy of our measurement.

Whether one assumes a perfectly accurate detector or an imperfect one, ##\textbf{k}## represents the measured value and thus represents an observable in both cases. The difference between the two cases is in the variance as represented by the confidence intervals. For a perfectly accurate detector, the variance approaches zero, such that we can be fully confident that the measured value is a completely accurate measure of the actual value of the object being measured. For an imperfect detector, we have a confidence interval. In derivations of the uncertainty principle, i.e., the variance relationship between Fourier conjugate variables, it is the variance and NOT the measured value that is being related.

Since you claim that I am not understanding your approach, I would ask that you write explicitly in standard mathematical notation, i.e., not Dirac notation, the transition from an unmeasured to a measured system so that I can see how exactly you are utilizing the delta function. I have done that already for the approach I suggest.

You state that the delta distribution is not related to a PDF, yet I have demonstrated how the assumption of zero variance turns a Gaussian PDF into a PDF represented by a delta distribution, where the assumptions of zero variance and a measured value ##\textbf{k}_{measured}## are what characterize measurement by a perfectly accurate detector.

PeterDonis said:
But the responses you have been getting are saying that you already have one staring you in the face and you're rejecting it on grounds that make no sense.

The responses I have been getting assume that the mapping ##m## is somehow consistent with the Fourier transform. It is obvious that an integral transformation, such as the (inverse) Fourier transform, and a scaling transformation, such as the mapping ##m## as defined above, are independent transformations, and that one cannot treat both mappings as the same transformation between the two spaces, i.e., ##X \leftrightarrow K##, where ##\mathscr{F}^{-1}: K \rightarrow X## and ##m: K \rightarrow X##. The properties of Fourier conjugate spaces deriving from the Fourier transform do not exist in the mapping ##m##.

I merely asked for published formulations of Poincare-type groups in K that do not utilize the mapping ##m##. At this point, it seems that the answer to this question is that none exist, however trivial such a formulation might be to derive.
 
  • #26
redtree said:
At this point, it seems that the answer to this question is that none exist
It does appear that nothing exists that will satisfy you, yes.

redtree said:
however trivial such a formulation might be to derive.
See my response to this in post #6. Have we just been wasting our time since then?
 
  • #27
With regard to #6, I was hoping to use an already published formulation, but if there is none, then I'll just derive it.

I am still very interested to see an explicit mathematical formulation of your approach regarding the delta distribution and measurement. I provided a mathematical formulation consistent with standard statistical approaches that clearly represents my perspective. Other than disagreeing, you have not demonstrated mathematically why my approach is incorrect. I would be very interested to see a formal demonstration regarding why the standard statistical approach I take is incorrect.

I think it would be also very useful to see an explicit mathematical formulation of your perspective.

In my opinion, there is great utility in these things even if it does not change #6.
 
  • #28
redtree said:
I am still very interested to see an explicit mathematical formulation of your approach regarding the delta distribution and measurement.
I said nothing about the delta function and measurement. I said that delta functions and complex exponentials are relevant for doing Fourier transforms, which is what you said you were interested in. I had thought that would be obvious, since the two are Fourier transforms of each other and that relationship is a direct expression of the "conjugate relationship" between momentum and position space. But all that has nothing to do with measurement.
 
  • #29
Somehow, I was under the impression that we were discussing detectors, measurements, PDFs and delta distributions over the last several posts.

In that case, if your point was only that one can Fourier transform a delta distribution when appropriate and that complex exponentials are relevant for Fourier transforms, then I agree, and we can close the thread.
 
  • #30
redtree said:
we can close the thread
Done.
 
