Thursday, January 5, 2023

Conditional expectation

Let $X$ be a sample space, $\mathcal{F}$ a sigma-algebra on $X$, and $\mathcal{G}$ a sub-sigma-algebra of $\mathcal{F}$.

Conditional expectation is a projection of random variables, $E\colon L^2(X,\mathcal{F}) \to L^2(X,\mathcal{G})$. The operator can be extended to $L^1(X,\mathcal{F})$ and is a Markov operator.
 
How is it related to conditional probability?
 
Let $A$ be an event, $\{B_i\}$ a decomposition of $X$, and $\mathcal{G}$ the sub-sigma-algebra generated by $\{B_i\}$. The conditional expectation of the indicator function $1_A$ is a linear combination of the $1_{B_i}$, and the coefficients are exactly the conditional probabilities $P(A \mid B_i)$:
$$E[1_A \mid \mathcal{G}] = \sum_i P(A \mid B_i)\, 1_{B_i}.$$
In this sense, the conditional expectation packages all the conditional probabilities together.
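As a sanity check, here is a minimal sketch on a hypothetical six-point sample space (the uniform measure, the event $A$ and the partition $\{B_1, B_2\}$ below are all made up for illustration): computing $E[1_A \mid \mathcal{G}]$ block by block gives the step function with values $P(A \mid B_i)$.

```python
import numpy as np

# Hypothetical example: X = {0, ..., 5} with uniform probability,
# partitioned into B_1 = {0, 1, 2} and B_2 = {3, 4, 5}.
X = np.arange(6)
P = np.full(6, 1 / 6)                         # uniform probability measure
blocks = [np.array([0, 1, 2]), np.array([3, 4, 5])]

A = np.array([0, 1, 3])                       # the event A
ind_A = np.isin(X, A).astype(float)           # the indicator function 1_A

# E[1_A | G] is constant on each block B_i, with value P(A | B_i).
cond_exp = np.zeros(6)
for B in blocks:
    p_B = P[B].sum()
    cond_exp[B] = (ind_A[B] * P[B]).sum() / p_B

print(cond_exp)   # constant 2/3 on B_1 and 1/3 on B_2
```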
 
More generally, given a C*-subalgebra $B$ of a C*-algebra $A$ having the same unit, a conditional expectation is a positive, surjective, unital operator $T\colon A \to B$ such that $T^2 = T$.
 
Rotation algebra is the universal C*-algebra generated by unitaries $u$ and $v$ with $uv = e^{2\pi i\theta}vu$. Cuntz algebra is the universal C*-algebra generated by isometries $S_i$ with $\sum_i S_i S_i^* = 1$.
 
Conditional expectation is used to prove the simplicity of the rotation algebra (for irrational $\theta$) and of the Cuntz algebra.
 
References:
* Functional Analysis for Probability and Stochastic Processes, Adam Bobrowski

Saturday, April 12, 2014

Two little remarks on sphere

1. "Take a sphere of radius R in N dimensions, N large; then most points inside the sphere are in fact very close to the surface." David Ruelle

Let $R>0$ and $0<\alpha<1$. Let $B(R) = \{(x_1,\dots,x_N) : x_1^2 + \cdots + x_N^2 \le R^2\}$. The ratio of the volume of $B(\alpha R)$ to the volume of $B(R)$ is $\alpha^N$, and $\lim_{N\to\infty} \alpha^N = 0$. It means that "a full sphere of high dimension has all its volume within a "skin" of size $\varepsilon$ near the surface" (Collet and Eckmann). This phenomenon seems to be related to the concentration of measure and other probabilistic perspectives.
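A three-line check of the ratio $\alpha^N$ (the values of $\alpha$ and $N$ below are arbitrary choices for illustration):

```python
# The fraction of the volume of B(R) inside B(alpha * R) is alpha^N,
# so the fraction in the outer "skin" is 1 - alpha^N.
alpha = 0.99                       # skin thickness: 1% of the radius
for N in (3, 100, 1000):
    print(N, 1 - alpha ** N)       # the skin's share of the volume grows with N
```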


2. A Hermitian matrix $A$ is positive if and only if all of its eigenvalues are nonnegative. We write $A \ge 0$.

A 2-by-2 positive matrix is of the form $A = \begin{pmatrix} t+x & y+iz \\ y-iz & t-x \end{pmatrix}$ where $t \ge 0$ and $x^2+y^2+z^2 \le t^2$.

The matrix $\begin{pmatrix} t_0+x_0 & y_0+iz_0 \\ y_0-iz_0 & t_0-x_0 \end{pmatrix}$ can be identified with the ball
$$B(x_0,y_0,z_0;t_0) := \{(x,y,z)\in\mathbb{R}^3 : (x-x_0)^2 + (y-y_0)^2 + (z-z_0)^2 \le t_0^2\}.$$

Let $A_j = \begin{pmatrix} t_j+x_j & y_j+iz_j \\ y_j-iz_j & t_j-x_j \end{pmatrix}$ for $j=1,2$. We have the following equivalence:
$$A_1 \ge A_2 \iff (x_1-x_2)^2 + (y_1-y_2)^2 + (z_1-z_2)^2 \le (t_1-t_2)^2 \text{ and } t_1 \ge t_2 \iff B(x_2,y_2,z_2;t_2) \subseteq B(x_1,y_1,z_1;t_1).$$
In other words, the ordering of the matrices corresponds to the inclusion of the balls!
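A numerical spot check of this equivalence on random parameters (a sketch; the small tolerance in the positivity test is an implementation detail, not part of the statement):

```python
import numpy as np

def mat(t, x, y, z):
    # the 2x2 Hermitian matrix parametrized by (t, x, y, z)
    return np.array([[t + x, y + 1j * z], [y - 1j * z, t - x]])

def geq(A, B, tol=1e-12):
    # A >= B means A - B is positive semidefinite
    return bool(np.all(np.linalg.eigvalsh(A - B) >= -tol))

def ball_contains(c1, t1, c2, t2):
    # B(c2; t2) is contained in B(c1; t1) iff |c1 - c2| <= t1 - t2
    return bool(np.linalg.norm(np.asarray(c1) - np.asarray(c2)) <= t1 - t2)

rng = np.random.default_rng(0)
agree = True
for _ in range(1000):
    t1, t2 = rng.uniform(0, 2, 2)
    c1, c2 = rng.uniform(-1, 1, (2, 3))
    agree &= geq(mat(t1, *c1), mat(t2, *c2)) == ball_contains(c1, t1, c2, t2)

print(agree)   # matrix ordering and ball inclusion agree on every trial
```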


Tuesday, February 4, 2014

Matrix multiplication as a convolution


1. The product of two power series/polynomials is $(a_0 + a_1x + a_2x^2 + \cdots)(b_0 + b_1x + b_2x^2 + \cdots) = c_0 + c_1x + c_2x^2 + \cdots$, where the coefficients are given by $c_n = \sum_{k=0}^n a_k b_{n-k}$. This is sometimes called the Cauchy product, and it is a convolution.
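In code, the Cauchy product is exactly `numpy.convolve` applied to the coefficient sequences (the particular polynomials below are arbitrary):

```python
import numpy as np

a = np.array([1, 2, 3])        # 1 + 2x + 3x^2
b = np.array([4, 5])           # 4 + 5x
c = np.convolve(a, b)          # c_n = sum_k a_k * b_{n-k}
print(c)                       # [ 4 13 22 15], i.e. 4 + 13x + 22x^2 + 15x^3
```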

2. Let $G$ be a finite group and $\mathbb{C}[G]$ the group algebra with complex coefficients. Let $f = \sum_x f(x)\,x$ and $g = \sum_y g(y)\,y$ be two elements of $\mathbb{C}[G]$. Their product is $fg = \sum_z \left(\sum_{xy=z} f(x)g(y)\right) z$. When $f$ and $g$ are treated as functions $f,g\colon G \to \mathbb{C}$, $(f*g)(z) = \sum_{xy=z} f(x)g(y)$ is a convolution. In fact, $\mathbb{C}[G]$ is an example of the convolution algebra $L^1(G)$ (with counting measure).

3. Let $J = \{1,\dots,n\}$ and $G = J \times J = \{(i,j) : 1 \le i,j \le n\}$. $G$ is a groupoid: not every pair of elements of $G$ can be composed; $(a,b)$ can only be composed with $(c,d)$ when $b=c$. In that case, $(i,j)(j,k) = (i,k)$.
Let us consider the groupoid convolution algebra $\mathbb{C}[G]$ as above. The convolution in this case is $(f*g)(i,k) = \sum_{(i,j)(j,k)=(i,k)} f(i,j)\,g(j,k)$. If we rewrite this in more familiar notation, it is $(AB)_{ik} = \sum_{j=1}^n A_{ij}B_{jk}$. This is matrix multiplication.
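A direct check that summing over composable pairs reproduces `A @ B` (random matrices, numpy):

```python
import numpy as np

n = 3
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# (f * g)(i, k) = sum over composable factorizations (i,j)(j,k) = (i,k)
C = np.zeros((n, n))
for i in range(n):
    for k in range(n):
        C[i, k] = sum(A[i, j] * B[j, k] for j in range(n))

print(np.allclose(C, A @ B))   # groupoid convolution = matrix product
```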

Sunday, July 7, 2013

!

"Take a sphere of radius R in N dimensions, N large; then most points inside the sphere are in fact very close to the surface."

Saturday, June 22, 2013

From combinatorics to entropy

Let $N = n_1 + \cdots + n_k$ and $p_i = n_i/N$. Then
$$\log\left(\frac{N!}{n_1!\cdots n_k!}\right) \approx -N\sum_i p_i \log p_i$$
by Stirling's formula.
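A quick numerical check of the approximation, using `math.lgamma` for the exact log-factorials (the counts below are arbitrary):

```python
import math

counts = [300, 500, 200]
N = sum(counts)
p = [n / N for n in counts]

# exact value of log(N! / (n_1! ... n_k!)) via log-gamma
exact = math.lgamma(N + 1) - sum(math.lgamma(n + 1) for n in counts)

# Stirling approximation: N times the entropy of (p_1, ..., p_k)
approx = -N * sum(pi * math.log(pi) for pi in p)

print(exact, approx)   # the relative error is under 1%
```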

I wonder if this was the first time ever in human history that such an expression $\sum_i p_i \log p_i$ appeared! Entropy is often too abstract for me. The approximation above is a link between counting combinations and entropy, and it seems to provide the most concrete grasp of it.
This is the genius of Boltzmann, Maxwell and Gibbs, which led to the development of statistical mechanics.

Energy, entropy, free energy, enthalpy, the Legendre transform, etc. are still difficult for me to understand.



Rényi/generalized entropy:
$$D_q = \frac{1}{q-1}\log\sum_i p_i^q$$
It is related to the generalized dimension $\dim_q(\mu)$ and the $L^q$-spectrum $\tau(q) := \liminf_{r\to 0} \frac{1}{\log r} \log \sup \sum_i \mu(B_i)^q$:
$$\dim_q(\mu) = \lim_{r\to 0} \frac{D_q}{\log r} = \frac{\tau(q)}{q-1}.$$
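A small sketch computing $D_q$ as defined above (the distribution is made up; note that with this sign convention the $q\to 1$ limit is $\sum_i p_i \log p_i$, i.e. minus the Shannon entropy):

```python
import numpy as np

def D(q, p):
    # D_q = (1 / (q - 1)) * log(sum_i p_i^q), per the formula above
    p = np.asarray(p, dtype=float)
    if np.isclose(q, 1.0):
        return float(np.sum(p * np.log(p)))   # the q -> 1 limit
    return float(np.log(np.sum(p ** q)) / (q - 1))

p = [0.5, 0.3, 0.2]
for q in (0.5, 1.0, 2.0):
    print(q, D(q, p))    # increasing in q (minus Renyi entropy)
```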

Thursday, May 9, 2013

Fourier coefficients as eigenvalues/spectrum

In this post I want to make a connection between Fourier coefficients and eigenvalues/spectrum. Let me put the claim up front:
If $\lambda = \hat{f}(n)$ for some $n$, then $f - \lambda\delta_0$ is not invertible with respect to the convolution product.
Please feel free to jump directly to the end for the explanation.

Let $\mathbb{T} = [0,1]$ be identified with the circle $\{z \in \mathbb{C} : |z| = 1\} = \{e^{2\pi i t} : t \in [0,1]\}$. An integrable periodic function $f \in L^1(\mathbb{T})$ can be associated with its decomposition as a Fourier series
$$f(t) \sim \sum_{n=-\infty}^{\infty} \hat{f}(n) e^{2\pi i n t},$$
where the Fourier coefficients are defined by
$$\hat{f}(n) := \int_0^1 f(t) e^{-2\pi i n t}\,dt.$$
Intuitively the Fourier coefficients $\hat{f}(n)$ tell us the weights of the components $e^{2\pi i n t}$ within $f$; $\hat{f}$ is the spectrum of the signal $f$.

In linear algebra, the spectrum of a square matrix (or more generally a bounded linear operator) $A$ is the set of eigenvalues $\{\lambda \in \mathbb{C} : A - \lambda I \text{ is not invertible}\}$. Extending this, the spectrum of an element $a$ in a unital Banach algebra $\mathcal{A}$ is $\{\lambda \in \mathbb{C} : a - \lambda 1 \text{ is not invertible}\}$. Is there any way to connect this spectrum with the spectrum of a function?

Let us recall the definition of the convolution of two functions:
$$f * g(x) := \int_0^1 f(x-t) g(t)\,dt.$$
It has the following property: $\widehat{f*g}(n) = \hat{f}(n)\hat{g}(n)$. Note that $L^1(\mathbb{T})$ is an algebra with respect to this convolution product (not with respect to the usual multiplication)! However, $L^1(\mathbb{T})$ does not have an identity element (is there a function $g$ such that $f * g = f$ for all $f$?), so we cannot talk about invertibility in this algebra. (Hence in harmonic analysis we need something called an approximate identity as a substitute, sometimes appearing as a summability kernel in the classical case.) For convenience, let us move to a larger algebra $M(\mathbb{T})$ which contains $L^1(\mathbb{T})$.

$M(\mathbb{T})$ is the algebra of regular Borel measures on $\mathbb{T}$. A function $f \in L^1(\mathbb{T})$ is identified with the measure $f(t)\,dt \in M(\mathbb{T})$. The convolution product and the Fourier coefficients of a measure are extended as follows:
$$f * \mu(x) := \int_0^1 f(x-t)\,d\mu(t), \qquad \mu * \nu(E) := \int_0^1 \int_0^1 1_E(x+y)\,d\mu(x)\,d\nu(y) = \int_0^1 \mu(E-y)\,d\nu(y)$$
and
$$\hat{\mu}(n) := \int_0^1 e^{-2\pi i n t}\,d\mu(t),$$
where $1_E$ is the indicator function of $E$. The identity $\widehat{\mu*\nu}(n) = \hat{\mu}(n)\hat{\nu}(n)$ holds analogously.

Now we have an identity element in $M(\mathbb{T})$, namely the Dirac measure $\delta_0$:
$$f * \delta_0(x) = \int_0^1 f(x-t)\,d\delta_0(t) = f(x).$$
Its Fourier transform is $\hat{\delta}_0(n) = \int_0^1 e^{-2\pi i n t}\,d\delta_0(t) = 1$ for all $n$.

Finally, we can show our claim by considering the contrapositive of the statement.
Suppose $f - \lambda\delta_0$ is invertible; then $(f - \lambda\delta_0) * h = \delta_0$ for some $h \in M(\mathbb{T})$. Taking Fourier transforms we have $(\hat{f}(n) - \lambda)\hat{h}(n) = 1$ for all $n$. This implies that $\lambda \ne \hat{f}(n)$ for all $n$.


Remark:
1. Originally I wanted to prove the converse as well, but it involves some theory of the Gelfand transform.

2. To be frank, this may not be the best way of presenting $\hat{f}(n)$ as eigenvalues. A more proper explanation would be along the lines of the eigenspace decomposition of the regular representation
$$\pi\colon L^1(\mathbb{T}) \to B(L^2(\mathbb{T})), \qquad \pi(f)g := f * g.$$
In this case we can really claim that $\lambda = \hat{f}(n)$ for some $n$ if and only if $\lambda$ is an eigenvalue of $\pi(f)$.
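A discrete analogue of this remark can be tested numerically: on the cyclic group $\mathbb{Z}_n$, $\pi(f)$ is the circulant matrix of circular convolution by $f$, the characters (discrete versions of $e^{2\pi i n t}$) are its eigenvectors, and the eigenvalues are the DFT coefficients $\hat{f}(k)$. The specific $n$, $f$ and $k$ below are arbitrary choices.

```python
import numpy as np

n = 8
rng = np.random.default_rng(2)
f = rng.standard_normal(n)

# pi(f) as a matrix: circular convolution by f, C[x, t] = f[(x - t) mod n]
C = np.array([[f[(x - t) % n] for t in range(n)] for x in range(n)])

fhat = np.fft.fft(f)               # discrete "Fourier coefficients" of f

# the character e_k(x) = exp(2*pi*i*k*x/n) is an eigenvector of pi(f)
# with eigenvalue fhat[k]
k = 3
e_k = np.exp(2j * np.pi * k * np.arange(n) / n)
print(np.allclose(C @ e_k, fhat[k] * e_k))   # True
```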

3. I hope this post is still an interesting observation, and that it will arouse your interest in the linkage between Fourier analysis, the spectral theorem and representation theory. The interplay of the group action with the differential operator is behind the scenes. One may also ponder this question: why is $e^{2\pi i n t}$ so special?

Personally, my motivation for this investigation was the line of different generalizations of spectrum, from eigenvalues to the spectrum of a ring and of a C*-algebra!