
symmetries and conservation laws

May 14, 2011

One of the most beautiful results in modern physics is the connection between symmetry and conservation laws established by Emmy Noether near the beginning of the 20th century. The present paper served as a prelude to a discussion we had earlier this year about whether such results can be extended to dynamical systems more general than those normally considered in classical physics. After reviewing the Lagrangian formulation of classical mechanics and how it transforms under a change of coordinates, the paper shows how conserved quantities arise from invariance under particular coordinate transformations. A later draft will extend this derivation to continuous (field) systems. PDF

renormalization group

May 14, 2011

Renormalization group (RG) ideas are all the rage in physics these days, and one of my goals for graduate school is to become more familiar with this framework. Here are my thoughts so far.

The renormalization group is an extremely powerful tool that can be used to determine which microscopic aspects of a system are relevant to its macroscopic behavior. It exploits the crucial property that macroscopic systems are — for mathematical purposes — effectively infinite, and that there is a huge range of scales between the microscopic picture and the resolution at which finite-size effects start to manifest themselves.

The critical point

In its simplest application, RG also requires the system to be at a so-called critical point, which can be defined in the following way. Imagine that the system in question can be fully specified by a finite number of knobs or parameters

\mu = (\alpha,\beta,\gamma,\ldots)

and that as we tune one of the knobs \alpha there is a critical point \alpha_c where some extensive macroscopic observable \phi becomes multivalued. In other words, when we tune \alpha through \alpha_c in different copies of the system, individual copies end up with different values of \phi. Since \phi is extensive, we know it can be written as the average of some local quantity \phi(x)

\phi = \frac{1}{V} \int \phi(x) \, dV

where \phi(x) is called the order parameter. It gives the local value of the multivalued macroscopic observable \phi at different points x within the system (in an Ising ferromagnet, for example, \alpha would be the temperature and \phi(x) the local magnetization). Typically, we define \phi so that its numerical value in the single-valued region is zero.

Thus, with some precision we can now say that a critical point is a point in parameter space where the system average of some order parameter \phi becomes nonzero and multivalued. Since the entire (effectively infinite) system chooses a particular value of \phi, this requires that the local order parameter \phi(x) be correlated over infinitely large regions. In other words, its correlation length \xi diverges:

\xi \to \infty \quad \text{as} \quad \alpha \to \alpha_c

More to come…

statistical mechanics: inference for stationary distributions

September 25, 2010

E.T. Jaynes is famous for his (often quite dogmatic) promotion of the idea that the whole of statistical mechanics is nothing more than a statistical inference problem. While I’m not sure if I buy his entire program, these ideas have gotten me interested in trying to figure out which parts of statistical mechanics follow from the microscopic laws of physics and which parts are inference methods in disguise.

This goal — while mostly philosophical — does have some practical importance. Due to its success in physics, statistical mechanics is often viewed as a “model framework” from which to construct other theories of complex systems. However, it is unrealistic to expect that the microscopic dynamics of social or biological systems is governed by some Hamiltonian (unless we go all the way down to the microscopic particles that comprise them). If statistical mechanics depends on these mathematical properties in some crucial way, then there is not much hope for extending its scope to systems that drastically differ in their microscopic dynamics.

In this post, I plan to lay out my current understanding of the foundation of classical statistical mechanics — mainly to record some thoughts that have been floating around in my head for the past few years.

We start with some classical system whose microstates are characterized by canonical coordinates (p,q) that are governed by Hamilton’s equations

\dot{q} = \frac{\partial H}{\partial p} \quad \dot{p} = - \frac{\partial H}{\partial q}

for some Hamiltonian H.  More to come…
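As a concrete toy illustration of what it means for the microstate to be governed by Hamilton's equations (this sketch is mine, not part of the planned post), here is a minimal numerical integration of a one-dimensional harmonic oscillator, H = p^2/2m + k q^2/2, using a symplectic Euler step:

```python
# Toy sketch (not from the post): integrate Hamilton's equations
#   dq/dt = dH/dp,   dp/dt = -dH/dq
# for a 1D harmonic oscillator H = p^2/(2m) + k q^2/2 with symplectic Euler.
m, k = 1.0, 1.0            # mass and spring constant (arbitrary choices)
dt, n_steps = 0.01, 5000

q, p = 1.0, 0.0            # initial microstate (q0, p0)
trajectory = []
for _ in range(n_steps):
    p -= dt * k * q        # dp/dt = -dH/dq = -k q
    q += dt * p / m        # dq/dt =  dH/dp = p/m   (uses the updated p)
    trajectory.append((q, p))

energies = [pp**2 / (2 * m) + k * qq**2 / 2 for qq, pp in trajectory]
print(f"energy drift: {max(energies) - min(energies):.2e}")  # stays small
```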

The Second Law of Thermodynamics

September 15, 2010

Thermodynamics is an extremely powerful framework for making quantitative predictions about the behavior of macroscopic parameters for systems with huge numbers of interacting components. The problem, of course, lies in justifying this thermodynamic framework from more fundamental physical principles like Newton’s equations or quantum mechanics. The first law of thermodynamics easily lends itself to a more fundamental interpretation, since

dE = \delta Q + \delta W

is nothing more than a bookkeeping method that classifies changes in the total energy into (i) those arising from changes to the macroscopic parameters and (ii) energy absorbed or emitted by the remaining microscopic degrees of freedom.

The second law (which, in one common form, states that “the entropy of an isolated system cannot decrease with time”) is a far trickier beast. Its origins are thought to arise from statistical considerations of the chaotic nature of the system’s time evolution through the microstate space. However, to the best of my knowledge, a truly satisfying derivation has not yet been found. Nevertheless, the experimental evidence in favor of the second law is overwhelming, and so I find it very reassuring to know that the concept of entropy and the second law can be rigorously derived completely within the macroscopic framework of thermodynamics using only two additional empirical observations.

The first of these is Kelvin’s postulate that

There is no process that can convert heat into work with perfect efficiency.

In other words, it is impossible to transfer all the energy in the microscopic degrees of freedom to the macroscopic degrees of freedom — there is some qualitative difference between the two.

From this postulate, we can show that for any cyclic transformation of a system,

\oint \frac{\delta Q}{T} \leq 0

with equality in the case of a reversible process. This implies that for a reversible process, the quantity \delta Q/T is an exact differential, which we denote by

dS = \frac{ \delta Q}{T}

and so there exists a function of state S with the property that

\int_A^B \frac{\delta Q}{T} = S(B) - S(A)

for any reversible path from state A to state B. This function is known as the entropy.

Now consider a cycle formed by a (possibly irreversible) transformation from A to B followed by a reversible transformation back to state A. Splitting the Clausius inequality over the two legs of the cycle gives \int_A^B \frac{\delta Q}{T} + \int_B^A \frac{\delta Q}{T} \leq 0, and since the return leg is reversible the second integral equals S(A) - S(B). We therefore obtain

\int_A^B \frac{\delta Q}{T} \leq S(B) - S(A)

For an isolated system, \delta Q must vanish everywhere along the path, and hence the left-hand side of this inequality vanishes as well. Thus, we obtain the statement that

There exists a function of state S such that in an isolated system, if a state B is adiabatically accessible from an initial state A, then S(B) \geq S(A).

This statement gives us lots of information about the entropy landscape, but it still does not contain any information about the dynamics of the system. After all, just because a change can happen doesn’t necessarily mean that it will happen. To obtain predictive statements about the dynamics of the system, we must appeal to a second empirical assumption, which is that

All systems are constantly subjected to tiny fluctuations in their macroscopic degrees of freedom.

From this assumption, we conclude that there are no adiabatically accessible states for an isolated system in equilibrium — otherwise these small fluctuations would drive the system into a different state and the equilibrium condition would be violated. Combining this result with the entropy inequality implies that an isolated system in equilibrium sits at a local entropy maximum.

Thus, we see that the existence of the entropy function and its maximization principle follow naturally from two rather simple empirical postulates.

random graphs with communities and arbitrary degrees

January 19, 2010

Classical Erdős-Rényi random graphs lack many of the properties commonly associated with networks observed in the real world. In particular, they are restricted to binomial or Poisson degree distributions and are generated without a particular modular structure in mind (i.e., no group of nodes is treated any differently than any other group). Several additional random graph models have been proposed over the years that seek to alleviate these shortcomings. On the one hand, we have the configuration and G(\mathbf{w}) models that produce random graphs with an arbitrary degree sequence (or expected degree sequence). On the other hand, there are random graphs that incorporate community structure, such as the hierarchical random graph (HRG) or the stochastic block model (SBM). In fact, these models can produce non-Poisson degree distributions as well, but they often suffer from overfitting problems. In the case of the SBM, the maximum likelihood estimate for a particular graph is one in which each edge exists with probability p_{ij} = 1 if those nodes are connected in the observed network and p_{ij} = 0 if they are not, so the overfitting is particularly bad.

We can avoid the overfitting problems by restricting our attention to the k+1-parameter SBM. In this model, each node is assigned to one of k distinct communities. An edge between nodes i and j exists with probability p_s if i and j are both in community s and with probability p_0 otherwise. This is a generalization of the 2-parameter SBM used by Hofman and Wiggins in their recent paper on Bayesian community detection, and is suitable for a network with a single layer of community structure. However, the resulting degree distribution is still fundamentally Poisson (now actually the sum of several Poisson distributions).
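To make the model concrete, here is a minimal sketch of how one might sample a graph from this k+1-parameter SBM (the function name and parameter values are my own illustrative choices, not from the thesis):

```python
import random

def sample_k_plus_1_sbm(communities, p_within, p0, seed=None):
    """Sample a graph from the (k+1)-parameter SBM described above.

    communities : list assigning each node a community label 0..k-1
    p_within    : list of within-community probabilities p_s, one per community
    p0          : probability of an edge between nodes in different communities
    """
    rng = random.Random(seed)
    n = len(communities)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            p = p_within[communities[i]] if communities[i] == communities[j] else p0
            if rng.random() < p:
                edges.append((i, j))
    return edges

# Example with made-up parameters: 30 nodes, 3 communities, denser inside than between.
communities = [i // 10 for i in range(30)]
edges = sample_k_plus_1_sbm(communities, p_within=[0.4, 0.3, 0.5], p0=0.02, seed=1)
print(len(edges), "edges")
```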

In my upcoming senior thesis on network community detection, I propose a generalization of the k+1-parameter SBM that allows for an arbitrary degree sequence, effectively extending the configuration model to the realm of community structure. It can also be viewed as a slight generalization of the benchmark graph proposed by Lancichinetti et al. that is suitable for maximum likelihood inference. In addition, this new model (which I am tentatively calling the 2-layer configuration model) can be naturally associated with the generative process behind the modularity function (more on this later).

Intuitively, the 2-layer configuration model works as follows. Each node is given a degree k_i and is assigned to one of k distinct communities, so that each community has a total degree d_s. Furthermore, each community is given an “affinity” \alpha_s in the range [0,1]. We then place \alpha_s d_s / 2 edges inside each community by repeatedly picking two stubs at random within that community and connecting them with an edge (i.e., each community is treated as a separate configuration network with degree sequence \alpha_s k_i). The remaining edges are assigned by randomly choosing two of the unused stubs from anywhere in the network and connecting them with an edge (i.e., we then treat the entire network as a configuration network with degree sequence (1-\alpha_s) k_i). This yields a network with the desired degree sequence that also contains the desired community structure.
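A rough sketch of this two-layer stub-matching procedure (my own toy implementation: the function name, the rounding of \alpha_s d_s / 2, and the handling of stray leftover stubs are simplifications on my part; self- and multi-edges can occur, just as in the ordinary configuration model):

```python
import random

def two_layer_configuration_model(degrees, community, alpha, seed=None):
    """Sketch of the 2-layer configuration model described above.

    degrees   : dict node -> desired degree k_i
    community : dict node -> community label s
    alpha     : dict community label s -> affinity alpha_s in [0, 1]
    Returns a list of edges.
    """
    rng = random.Random(seed)
    edges, leftover = [], []

    # First layer: match roughly alpha_s * d_s / 2 edges inside each community.
    for s in set(community.values()):
        stubs = [v for v in degrees if community[v] == s for _ in range(degrees[v])]
        rng.shuffle(stubs)
        n_internal = 2 * int(alpha[s] * len(stubs) / 2)   # even number of stubs
        internal, rest = stubs[:n_internal], stubs[n_internal:]
        edges += [(internal[i], internal[i + 1]) for i in range(0, n_internal, 2)]
        leftover += rest

    # Second layer: match the remaining stubs across the whole network.
    rng.shuffle(leftover)
    if len(leftover) % 2:          # simplification: drop one stray stub
        leftover.pop()
    edges += [(leftover[i], leftover[i + 1]) for i in range(0, len(leftover), 2)]
    return edges

# Example with made-up parameters: two communities of five degree-3 nodes each.
deg = {v: 3 for v in range(10)}
com = {v: 0 if v < 5 else 1 for v in range(10)}
print(two_layer_configuration_model(deg, com, alpha={0: 0.8, 1: 0.8}, seed=1))
```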

For reasons outlined in a previous post, it is often desirable to have a model in which each edge exists independently of all the others. This can be achieved (as in the case of the configuration model) by relaxing the degree requirements to merely specifying the expected degree w_i for each node. However, because of the caveats outlined in that previous post, this model turns out to be rather complicated to write down. With a little work, one can show that the edge probabilities p_{ij} should be given by

p_{ij} = \left\{\begin{array}{ll} \frac{1}{2} \left( \frac{\alpha_{s} w_i^2}{d_{s}} + \frac{(1-\alpha_{s})^2 w_i d_{s}}{\sum_u (1-\alpha_u) d_u} \right) & : \text{if} \, i=j \, \text{and} \, i \in s \\ \frac{\alpha_{s} w_i w_j}{d_{s}} & : \text{if} \, i \neq j \, \text{and} \, i,j \in s \\ \frac{(1-\alpha_{s})(1-\alpha_{t}) w_i w_j}{\sum_u (1-\alpha_u) d_u} & : \text{if} \, i \in s \, \text{and} \, j \in t\end{array}\right.

A detailed derivation will be given in the thesis.
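In the meantime, here is a quick numerical consistency check of my own (with randomly made-up parameters, not taken from the thesis): counting each self-link twice toward a node's degree, the probabilities above reproduce the expected degrees w_i.

```python
import numpy as np

# My own sanity check of the piecewise p_ij above: with self-links counted
# twice, the expected degree of every node should come out to w_i.
rng = np.random.default_rng(0)
n, k = 20, 3
comm = rng.integers(0, k, size=n)            # community label of each node
w = rng.uniform(1.0, 5.0, size=n)            # expected degrees w_i
alpha = rng.uniform(0.0, 1.0, size=k)        # affinities alpha_s
d = np.array([w[comm == s].sum() for s in range(k)])   # community degrees d_s
D = ((1 - alpha) * d).sum()                  # sum_u (1 - alpha_u) d_u

def p(i, j):
    s, t = comm[i], comm[j]
    if i == j:
        return 0.5 * (alpha[s] * w[i]**2 / d[s]
                      + (1 - alpha[s])**2 * w[i] * d[s] / D)
    if s == t:
        return alpha[s] * w[i] * w[j] / d[s]
    return (1 - alpha[s]) * (1 - alpha[t]) * w[i] * w[j] / D

expected_degree = [2 * p(i, i) + sum(p(i, j) for j in range(n) if j != i)
                   for i in range(n)]
print(np.allclose(expected_degree, w))       # True
```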

clearing up the configuration model

January 18, 2010

It has long been known that classical Erdős-Rényi random graphs are rather limited in the types of degree distributions they can produce. The degree of any given node follows a binomial distribution, which goes over into a Poisson distribution in the sparse limit. In contrast, many real-world networks possess power-law degree sequences that would be extremely rare under a Poisson distribution. To address this problem, many have turned to the so-called configuration model, which produces a random graph with an arbitrary degree sequence.

A configuration network is constructed in the following manner. Each node has a desired degree k_i, so we imagine that each node has k_i edge “stubs” attached to it. Edges are then assigned by randomly choosing two stubs and drawing an edge between them. This results in a relatively simple random graph model that can reproduce any desired degree sequence.
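For concreteness, here is a minimal sketch of this stub-matching construction (my own toy code and function name; the networkx library also ships a configuration_model routine that does essentially the same thing):

```python
import random

def configuration_model(degrees, seed=None):
    """Pair up stubs uniformly at random (self- and multi-links can occur)."""
    rng = random.Random(seed)
    stubs = [node for node, k in enumerate(degrees) for _ in range(k)]
    rng.shuffle(stubs)
    # consecutive stubs in the shuffled list become the edges
    return list(zip(stubs[::2], stubs[1::2]))

# Example with a made-up (even-sum) degree sequence.
print(configuration_model([3, 3, 2, 2, 1, 1], seed=42))
```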

Yet this model presents several computational difficulties. For instance, the individual edges are not independent events, so it is difficult to obtain the probability that two particular nodes are connected. This is important because one might be interested in quantities like the average number of edges lying within a certain set of nodes, which rely on such probabilities.

The solution to these difficulties is to use a slightly different random graph model which merely fixes the expected degree sequence rather than the actual degree sequence. This model was originally introduced by Chung and Lu, and unfortunately has remained without a name throughout much of the literature. We simply refer to it as the G(\mathbf{w}) model in keeping with the original paper.

According to the original formulation by Chung and Lu, the G(\mathbf{w}) model assigns an expected degree w_i to each node, and each possible edge exists independently with probability

p_{ij} = \frac{ w_i w_j }{\sum_k w_k}

so that the expected degree of a node is given by

\langle k_i \rangle = \sum_j p_{ij} = w_i \sum_j w_j / \sum_k w_k = w_i

as desired.
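Here is a minimal sketch of sampling from the model as just stated (my own illustration and function name, assuming the weights are small enough that w_i w_j / \sum_k w_k never exceeds 1; note that, like the standard presentation, it ignores self-links entirely):

```python
import random

def chung_lu(w, seed=None):
    """Link each pair i < j independently with probability w_i * w_j / sum(w)."""
    rng = random.Random(seed)
    total = sum(w)
    return [(i, j) for i in range(len(w)) for j in range(i + 1, len(w))
            if rng.random() < w[i] * w[j] / total]

# Example with made-up expected degrees.
print(chung_lu([3.0, 3.0, 2.0, 2.0, 1.0, 1.0], seed=7))
```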

However, this standard presentation of the G(\mathbf{w}) model ignores one tiny detail: self-links or loops. One can immediately see that such edges are necessary, for without the possibility of loops the expected degree of a node becomes

\langle k_i \rangle = \sum_{j \neq i} p_{ij} = w_i \sum_{j \neq i} w_j / \sum_k w_k = w_i \left( 1 - \frac{w_i}{\sum_k w_k} \right)

and we no longer obtain a graph with the desired expected degree sequence. Of course, in the limit of infinite size (and finite degrees), the discrepancy disappears, which helps explain why this problem has generally gone unnoticed. After all, the G(\mathbf{w}) model was originally proposed in order to facilitate calculations of certain quantities in the limit of infinite size. But in any finite graph (i.e., all real-world networks), this discrepancy can have rather large effects, so we are forced to include self-links. Nor is this a radical departure from other models: the configuration model also allows for self-links (as well as multi-links) because there is a nonzero probability that two stubs from the same node will be chosen to be connected.

However, once we allow for the possibility of self-links, we encounter the tricky question of how such edges are counted toward the degree of a node or the number of edges in the network. Most would agree that a self-link should still count as a single edge in the total number of edges m. As for the degree k_i, we note that the famous “handshake theorem” leads to the relation

\sum_i k_i = 2m

between the individual degrees and the total number of edges. If we wish to preserve this useful identity (as well as the handshake theorem that leads to it), we must count each self-link twice when calculating the degree of a given node. This is also necessary to obtain an oft-quoted formula for the expected number of edges within a given set of nodes

\langle e_S \rangle = \frac{1}{4m} \left(\sum_{i \in S} k_i \right)^2 = \frac{d_S^2}{4m}

which is used in the derivation of the modularity function. Moreover, this counting scheme agrees with the original configuration model as well: a self-link uses up two stubs from that node, so we must count this edge twice if we are to end up with the desired degree sequence.

Yet if we agree to count each self-link twice when calculating the degree of a given node, the original probability p_{ij} becomes invalid again, because it leads to an average degree of

\langle k_i \rangle = 2 p_{ii} + \sum_{j \neq i} p_{ij} = (w_i \sum_{j} w_j + w_i^2) / \sum_k w_k = w_i \left( 1 + \frac{w_i}{\sum_k w_k} \right)

Thus, the only way to formulate the G(\mathbf{w}) model so that it is exact for finite graphs and preserves the handshake theorem is to take p_{ij} equal to

p_{ij} =\left\{\begin{array}{ll} w_i^2 / 4m & \text{if} \, i=j \\ w_i w_j / 2m & \text{else}\end{array} \right.

where 2m = \sum_k w_k. With this choice, \langle k_i \rangle = 2 p_{ii} + \sum_{j \neq i} p_{ij} = w_i as desired, and the expected number of edges within a set of nodes S comes out to d_S^2/4m, in agreement with the formula quoted above.
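As a quick numerical sanity check (my own, not part of the original post), these probabilities do reproduce the expected degree sequence once self-links are counted twice:

```python
import numpy as np

# My own check: with p_ii = w_i^2 / 4m and p_ij = w_i w_j / 2m, counting each
# self-link twice toward the degree gives <k_i> = w_i exactly.
w = np.array([3.0, 3.0, 2.0, 2.0, 1.0, 1.0])   # made-up expected degrees
two_m = w.sum()                                # 2m = sum_k w_k

P = np.outer(w, w) / two_m                     # off-diagonal: w_i w_j / 2m
np.fill_diagonal(P, w**2 / (2 * two_m))        # diagonal: w_i^2 / 4m

expected_degree = P.sum(axis=1) + np.diag(P)   # = 2 p_ii + sum_{j != i} p_ij
print(np.allclose(expected_degree, w))         # True
```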

degeneracy problems for community detection

January 3, 2010

[Figure: the degenerate modularity landscape for the TPA metabolic network]

In the summer of 2008 I participated in the REU program at the Santa Fe Institute (which I highly recommend to anyone who is interested), and there I started working on the problem of community detection in complex networks, in particular the method popularly known as modularity maximization. This paper is the result of a collaboration with Aaron Clauset (my mentor) and Yves-Alexandre de Montjoye concerning the significance of modularity-based community detection algorithms in practical contexts. Our main finding is a degeneracy problem for networks with highly modular or hierarchical structure, whereby the modularity function admits an exponential number of high-modularity partitions consisting of highly dissimilar community assignments. This runs counter to the conventional understanding in the literature, where a degenerate modularity landscape is usually assumed to be a sign of weak community structure. Furthermore, since modularity maximization is in general an NP-hard problem, we cannot necessarily trust the results of approximate maximization algorithms, at least when it comes to community assignments. The present version of our paper is available here.
