

5. Polarization of light, and a question on the number of principal components to use.

A recent question received two responses, which are given after the question is repeated below.

Question:

In a recent exchange, it was noted that gratings can produce polarized light. This made me wonder if it is advantageous to have polarized light. Or is it a problem? And if a problem, why (or in what situation is it deleterious)? Conversely, if it is advantageous, why? Anyone?

Responses:

1st (from Jim Reeves)

I am stepping way outside my area here, so I may be completely wrong, but I know that mid-infrared radiation is sometimes polarized and used to study polymer structure and orientation. My first reaction to the question is that there might be circumstances where polarized light would give different results than unpolarized light would. For example, if one were using transmission to study polymer films, how the films were oriented in the beam might influence the results.

2nd (from Howard Mark)

The physics of gratings causes the diffracted (and separated) light to be polarized, which in itself is not necessarily a problem, but which can cause problems for analytical chemists if that light interacts with other polarization-sensitive components (of course, you also first have to define the term "problem"!).

One common situation where this can occur is probably not sufficiently well appreciated by the analytical community: the physics of light interacting with any dielectric (i.e., any non-metal) indicates that unless the light strikes the surface at exactly normal incidence, the polarization component perpendicular to the plane of incidence (the s-component, oriented parallel to the surface) is preferentially reflected, which thus impresses partial polarization on the reflected ray. The degree of polarization depends on the angle of incidence and the refractive indices of both media. If the incident ray arrives at what is known as Brewster's angle, the reflected ray is completely polarized. By conservation of energy, the portion refracted into the material is complementary to the portion reflected; therefore the refracted ray is also partially polarized.
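To put numbers on this, here is a minimal Python sketch (my addition, not part of the original response) that evaluates the Fresnel reflectances for an assumed air-to-glass interface; the refractive indices are illustrative:

```python
import numpy as np

# Fresnel reflectances for an air-to-glass interface (assumed example
# indices), illustrating partial polarization on reflection and
# Brewster's angle.
n1, n2 = 1.0, 1.5                                 # air, typical glass

theta_i = np.radians(np.linspace(0.0, 89.0, 90))  # angles of incidence
theta_t = np.arcsin(np.sin(theta_i) * n1 / n2)    # Snell's law

# Fresnel amplitude coefficients for s (perpendicular) and p (parallel)
r_s = (n1 * np.cos(theta_i) - n2 * np.cos(theta_t)) / \
      (n1 * np.cos(theta_i) + n2 * np.cos(theta_t))
r_p = (n2 * np.cos(theta_i) - n1 * np.cos(theta_t)) / \
      (n2 * np.cos(theta_i) + n1 * np.cos(theta_t))
R_s, R_p = r_s**2, r_p**2                         # reflected intensity fractions

brewster = np.degrees(np.arctan(n2 / n1))         # ~56.3 degrees for n2 = 1.5
print(f"Brewster's angle: {brewster:.1f} degrees")
# At every oblique angle R_s > R_p, so reflection partially polarizes the
# beam; at Brewster's angle R_p ~ 0 and the reflected ray is fully s-polarized.
```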

All this happens, incidentally, even if the material is not itself optically active. If optical activity is involved, then the situation is even more complicated. It is clear, however, that if the incident light is itself polarized (e.g., because it came off a grating), then the amount reflected (or refracted) will depend very sensitively on the geometry of the situation as well as on the chemistry and environment of the samples: e.g., refractive index is usually temperature-sensitive.

The other side of the coin, of course, is that these effects have been used for just that purpose: to assess the purity of enantiomers, etc. In this case it is not a problem, just another application of known physical phenomena.

Then there is another question, with one response. The question was:

Here is somewhat of a theoretical question concerning principal components. The background to the question is: when performing a regression on a simple mixture of two components, the regression coefficients will have positive signs for the component being modelled and negative signs for the other, making the totals for the two constituents add up to 100%. For more complicated situations, this summation to 100% can also hold. That is, there are negative and positive signs among the regression coefficients, and some of the combinations of principal components interact in a fashion that brings (and keeps) the total at 100% (more easily visualized in PCA than in PLS).
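To illustrate the closure effect the questioner describes, here is a small synthetic sketch (a hypothetical example, not from the forum exchange): two made-up pure spectra are mixed under the constraint c1 + c2 = 100%, and the regression vectors for the two constituents come out equal and opposite, so every pair of predictions totals 100%:

```python
import numpy as np

# Hypothetical two-component mixture with closure: c1 + c2 = 100%.
rng = np.random.default_rng(0)

s1 = np.abs(np.sin(np.linspace(0, 3, 50)))       # made-up pure spectrum 1
s2 = np.abs(np.cos(np.linspace(0, 3, 50)))       # made-up pure spectrum 2
c1 = rng.uniform(0, 100, 100)                    # concentrations, %
c2 = 100.0 - c1                                  # closure constraint

A = np.outer(c1, s1) + np.outer(c2, s2)          # mixture spectra (Beer's law)
A += rng.normal(0, 0.01, A.shape)                # small measurement noise

X = np.column_stack([np.ones(len(c1)), A])       # add intercept column
b1, *_ = np.linalg.lstsq(X, c1, rcond=None)      # regression for component 1
b2, *_ = np.linalg.lstsq(X, c2, rcond=None)      # regression for component 2

# Spectral coefficients are equal and opposite; intercepts sum to 100,
# so the two predictions always total 100%.
print(np.allclose(b1[1:], -b2[1:], atol=1e-6))   # True
print(b1[0] + b2[0])                             # ~100
print(np.allclose(X @ b1 + X @ b2, 100.0))       # True
```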

When deciding how many principal components to use in a model, cross-validation is often relied upon: the modeler observes where the standard error, plotted against the number of principal components, reaches a minimum, and uses that number of components. But that minimum is, as I understand it, a function of random error only. Thus, the question is: wouldn't there be a need to ensure that the totals of the component levels, known and unknown, add to 100%? And if so, would this mean fewer or more principal components than predicted by cross-validation, which is based on random error/residuals alone?
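For reference, here is a minimal sketch of the cross-validation rule the question alludes to, using synthetic stand-in data and scikit-learn's PLS implementation (the data and fold count are assumptions, not from the post):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Pick the number of PLS factors at the minimum of the cross-validated
# standard error. X (spectra) and y (reference values) are synthetic
# stand-ins for real calibration data.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 200))                    # placeholder spectra
y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.1, size=60)

rmsecv = []
for n in range(1, 11):                            # candidate factor counts
    pred = cross_val_predict(PLSRegression(n_components=n), X, y, cv=10)
    rmsecv.append(np.sqrt(np.mean((pred.ravel() - y) ** 2)))

best = int(np.argmin(rmsecv)) + 1                 # factor count at the CV minimum
print(best, rmsecv[best - 1])
```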

And the response was:

The word "need" is perhaps a bit strong in this case. It is a truism, of course, that the components of a mixture always add to 100% - - whether we know what their individual concentrations are or not, and this lies at the center of the problem. In simpler times, i.e.: before everybody became enamored of PCA and PLS, the equivalent discussion was over the use of the "Beer's Law formulation" (also called the K-matrix approach) versus the "Inverse Beer's Law formulation" (the P-matrix approach). Chris Brown wrote a nice description of this (Brown, C.; "Beer's Law versus Inverse beer's Law I: Effect of Unknown Impurities"; Spectroscopy, 1(4), p.32-37 (1986)).

There are different advantages to the two approaches. On the one hand, the Beer's Law approach allows you to determine the spectra of the various components in the mixture, as well as their concentrations. These spectra are mathematical factors that replace Principal Components in a development that is otherwise similar. The downside of this is that you have to do just what you are asking about: measure the concentrations of all the components in your samples, whether or not you are interested in them.
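As a concrete illustration of the K-matrix formulation (an assumed example, not Chris Brown's code), the pure-component spectra are estimated from the full concentration matrix, and a new sample's concentrations are then recovered from its spectrum:

```python
import numpy as np

# "Beer's Law" (K-matrix / classical least squares) formulation: A = C K.
# The pure-component spectra K are recovered from the measured spectra A
# and the FULL concentration matrix C; this requires knowing every
# component's concentration in the calibration samples.
rng = np.random.default_rng(2)

K_true = np.abs(rng.normal(size=(3, 80)))        # 3 pure-component spectra
C = rng.dirichlet(np.ones(3), size=25) * 100     # all concentrations, sum to 100%
A = C @ K_true + rng.normal(scale=0.01, size=(25, 80))

K_hat, *_ = np.linalg.lstsq(C, A, rcond=None)    # estimated pure spectra

# Predict all concentrations of a new sample from its spectrum a_new:
a_new = np.array([40.0, 35.0, 25.0]) @ K_true
c_new, *_ = np.linalg.lstsq(K_hat.T, a_new, rcond=None)
print(c_new)                                     # ~ [40, 35, 25]
```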

The biggest advantage of the Inverse Beer's Law formulation (according to Chris), which is essentially what we use in NIR, is that you need only measure the concentrations of those constituents you are interested in and want to generate calibrations for. Actually, there are other, much more fundamental, reasons for preferring the Inverse Beer's Law formulation; in fact, I have just submitted a column discussing the point; I expect it will come out in about February or March - keep your eyes open.
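By contrast, here is a minimal sketch of the Inverse Beer's Law (P-matrix) formulation, again with synthetic data: the constituent of interest is regressed directly on the absorbances, and no other component's concentration is ever needed:

```python
import numpy as np

# "Inverse Beer's Law" (P-matrix / inverse least squares) formulation:
# c = A p. Only the reference values of the one analyte of interest are
# required for calibration.
rng = np.random.default_rng(3)

A = rng.normal(size=(25, 10))                    # spectra at 10 selected wavelengths
c = A @ rng.normal(size=10) + rng.normal(scale=0.1, size=25)  # one analyte only

p, *_ = np.linalg.lstsq(A, c, rcond=None)        # regression vector for that analyte
c_pred = A @ p                                   # predicted concentrations
print(np.sqrt(np.mean((c_pred - c) ** 2)))       # calibration error
```

Note that the number of wavelengths is kept below the number of calibration samples here, as the inverse formulation requires for a well-posed regression; PCA and PLS relax that restriction by compressing the wavelengths into factors first.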

In terms of the spectroscopy: yes, if you measured all those concentrations, there is a good deal of additional information you could get about your samples and about the calibration resulting from your measurements, including better estimates of the number of factors to use. The reason we base our decisions on mathematically derived quantities is simply that we don't normally measure all that data, most of which would be considered extraneous, certainly not for routine applications. Since the spectroscopic measurements are much easier to make than the reference laboratory measurements, we try to get as much information out of them as we can. Thus we use cross-validation and other statistics calculated from the spectroscopic data as the basis for our decisions, in place of data that we don't have.

Howard