Robert Dodier <robert.dodier at gmail.com> writes:
> On 2013-10-08, Rupert Swarbrick <rswarbrick at gmail.com> wrote:
>> I'm a bit confused about this bit. This is sine/cosine rather than their
>> inverses. So at the edge of their output ranges, they have zero
>> derivative. Since the trig functions are analytic with well-behaved
>> coefficients, I'm interested to know what can go weird.
>
> Well, linear combinations are by far the easiest to handle, since it is
> just rescaling and shifting. But anything other than a linear
> combination changes the shape too. E.g. suppose foo is a Gaussian
> random variable (easiest to deal with); then sin(foo) will have some
> asymmetrical density with steep peaks at 1 and -1, and perhaps another
> peak somewhere in the middle (depending on the mean & s.d. of foo).
> sin(foo) is approximately Gaussian if the mean of foo is several s.d.
> away from the peaks and troughs of the sine function -- otherwise it's
> "interesting".
Ah, now I understand what you meant. Thanks!
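A quick way to see the effect numerically is a Monte Carlo sketch in
Maxima. This is only an illustration; the mean and s.d. below are
arbitrary values, not anything from the thread:

    load(distrib)$       /* random_normal */
    load(descriptive)$   /* histogram */
    mu: 1.2$  sigma: 0.8$                    /* arbitrary parameters for foo */
    xs: random_normal(mu, sigma, 10000)$     /* 10000 samples of foo */
    ys: map(sin, xs)$                        /* transform each sample */
    histogram(ys, nclasses=60)$

Mass piles up near +/-1 whenever foo puts appreciable probability
around the peaks and troughs of sin, and the histogram looks roughly
Gaussian when the mean of foo is well clear of them.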
>
>> I agree with this and I'd be interested to read what people have done on
>> the subject. Presumably cleverer than "Input noise with some
>> distribution. Record output distribution"! Where should I be looking
>> for more information?
>
> Well, in some simple cases, it is possible to find the density exactly
> via a change of variables method. E.g. see Papoulis, "Probability,
> Random Variables, and Stochastic Processes", somewhere in the first
> few chapters. But that might be hard to work with -- e.g. can you find
> the convolution of that density with another one (to find the density
> of a sum of variables)? And if there are variables which are not
> independent, that makes the problem more complicated still.
> If you're interested in this topic, feel free to drop me a line.
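For the monotone case the change-of-variables recipe is easy to play
with in Maxima. A minimal sketch, taking Y = exp(X) with X ~ N(0,1) so
that the inverse is log(y) and the Jacobian factor is 1/y (these
particular choices are just for illustration):

    load(distrib)$                          /* pdf_normal */
    assume(y > 0)$
    /* f_Y(y) = f_X(g^(-1)(y)) * |d g^(-1)(y)/dy|, here g^(-1)(y) = log(y) */
    fY(y) := pdf_normal(log(y), 0, 1) / y;
    fY(y);                                  /* the standard lognormal density */

For a non-monotone transform like sin the same idea applies branch by
branch, summing the contribution of every x with sin(x) = y, which is
exactly where the shape described above comes from.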
Hah, this reminds me of my third year as an undergraduate. I'd just
moved in with some statisticians and, full of hubris as a "proper
mathematician", I spent an entire day discovering the hard way how
difficult it is to get the density of a sum of random variables
("they're i.i.d! How hard can it be?") Nowadays, I start with a little
more humility... :-)
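Even the textbook case shows where the time goes. For Z = X + Y with
X and Y i.i.d. Uniform(0,1) -- chosen here purely for illustration --
the convolution integral has to be split by hand over two ranges of z:

    /* f_Z(z) = integrate(f_X(t) * f_Y(z - t), t); both densities are 1
       on their support, so all the work is in the limits */
    assume(z > 0, z < 1)$
    integrate(1, t, 0, z);            /* -> z,     for 0 <= z <= 1 */
    forget(z < 1)$  assume(z > 1, z < 2)$
    integrate(1, t, z - 1, 1);        /* -> 2 - z, for 1 <= z <= 2 */

The densities themselves are trivial; the bookkeeping over the
integration limits is what eats the afternoon.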
And thanks for the offer, but my evenings at the moment are taken up
learning (the basics of) signal processing and the intricacies of
writing fast and accurate fixed-point algorithms to run on a DSP. I'm
starting to appreciate what all the fuss was about when floating point
hardware first appeared!
Rupert