The Math Thread - Page 2
Oshuy
Netherlands529 Posts
On June 09 2017 23:21 Nin54545 wrote: Thank you for the answer, Manit0u. I am not so confident in my ability; I will study equations in a few years...

Then it is probably too early. In this specific case, sin(30), cos(60) and log100(10) are just clever ways of writing 1/2, and most of the sqrt() also disappear. Pretty much everything simplifies nicely; it basically looks like an equation written to express something simple in a complicated way. The weird ones are:
- 14.661 and 21584, which seem arbitrary. Probably aimed at obtaining a given number as a result, but not an elegant way to do it.
- D just looks wrong. Probably 512 and not 5*sqrt(2), but even then it should probably read (512*0.5)² instead of 512*(0.5)² (and then D=8).
- sqrt(1/2.sqrt(16)) in C looks strange; it leaves a sqrt(2) in the equation, which is awkward when everything else is just fractions.
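The claim that those terms are disguised halves is easy to check numerically; a quick sketch in Python (assuming the puzzle's angles are in degrees):

```python
import math

# each of these terms is just 1/2 in disguise
s = math.sin(math.radians(30))   # math.sin takes radians, so convert from degrees
c = math.cos(math.radians(60))
l = math.log(10, 100)            # log base 100 of 10 = ln(10)/ln(100) = 1/2

print(s, c, l)  # all three are 0.5, up to floating-point noise
```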
Ernaine
60 Posts
On June 09 2017 04:13 CecilSunkure wrote: I've been eager to learn about the Fourier transform. In particular I wanted to use it to do pitch adjustments of sound samples. I have some code here for playing sounds, and want to add some pitch adjustment stuff. Would anyone mind chatting with me about the mathematics? I was hoping to find someone knowledgeable about Fourier transforms and their applications that I could bounce a bunch of questions off of. Please PM me if anyone would be so kind!

Doing some biophysical modeling and analyzing the (possible) oscillations in noise, I did write some FT code. It's a bit different from pitch adjustment: I wanted a power spectrum of time series data. I used the FFTW C library, which seems to be the fastest thing you can get for FTs, unless you use something optimized for a specific architecture/chipset/hardware. It is reasonably straightforward to use, and you can call it from pretty much any language: C/Python/Matlab/Julia. It is very much a black box, and the library is so fast because it divides the problem into chunks and uses a mix of several numerical methods, depending on the nature of the problem and the hardware you are running. It is completely opaque, but since it is an industry standard, that's ok.

To me the signal processing element of it all was a bit of a dark art. You need to be an electrical engineer specialized in signal processing to really know how to choose the parameters that most effectively convert time series data into frequency series data. Windowing, sampling, spectral leakage, aliasing, frequency resolution, a version of the Heisenberg uncertainty principle saying that an increase in frequency resolution necessarily decreases time resolution, and all kinds of artifacts that might pop up: none of that was easy to understand 'on the fly'.

I still remember that 'convolution in the time domain corresponds to multiplication in the frequency domain', but if I had to explain it right now, I'd fail. In the end, I am a chemist by training, working with mathematicians turned biologists. Signal processing using FTs is a big thing in engineering, and scientists just use it as a black box most of the time. The discreteness also doesn't help: the continuous math is 'simple' to understand, as long as you are comfortable with the complex plane, but the implications of discreteness made it all a bit more confusing, especially since I never took a course in discrete maths. And I was on a deadline to just get it working, so I didn't have the time to patiently go through a signal processing textbook and try out simple things step by step.

That said, for what you are doing: if you transform some sound from the time domain to the frequency domain, you can hit it with some function. The frequencies in your signal will then change, and when you convert it back to the time domain it will be a different sound, as it contains different harmonics/overtones. I guess this is how autotune works, in a way. For the math, I thought this video was best: in the end it is all about projecting the time data onto the complex plane. That's why it uses the sine and cosine.

As for applications, it is used all over the place. It is probably one of the most commonly used algorithms around. Everyone with an electronic device (phone, mp3 player, etc.) uses it all the time: sound, spectrum analysis, recording/sampling of data, but also data compression. As scientists, we usually use it when we record a spectrum of a molecule. Instead of getting how many photons it absorbs at each time, we get a fingerprint of which frequencies it absorbs in general. It removes noise, compacts what is happening over a longer period of time, and shows all the info we want to know in a straightforward manner.
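A minimal power-spectrum sketch in the spirit of the above, using numpy's FFT rather than FFTW purely to stay self-contained (the signal and sample rate are made up, and no windowing is applied):

```python
import numpy as np

rng = np.random.default_rng(0)   # fixed seed so the example is reproducible
fs = 1000                        # sampling rate in Hz (made-up value)
t = np.arange(0, 1.0, 1 / fs)    # one second of samples
# a 50 Hz tone buried in some noise
x = np.sin(2 * np.pi * 50 * t) + 0.5 * rng.standard_normal(t.size)

# rfft: FFT of a real-valued signal; rfftfreq gives the matching frequency axis
spectrum = np.fft.rfft(x)
freqs = np.fft.rfftfreq(x.size, d=1 / fs)
power = np.abs(spectrum) ** 2 / x.size   # simple (unwindowed) power spectrum

peak = freqs[np.argmax(power[1:]) + 1]   # dominant frequency, skipping the DC bin
print(peak)  # ~50.0 Hz
```

The pitfalls mentioned above (leakage, aliasing, resolution) all show up in how you choose `fs`, the record length, and the window; this sketch ignores them deliberately.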
D_lux
Hungary60 Posts
On June 09 2017 04:13 CecilSunkure wrote: I've been eager to learn about the Fourier transform. In particular I wanted to use it to do pitch adjustments of sound samples. I have some code here for playing sounds, and want to add some pitch adjustment stuff. Would anyone mind chatting with me about the mathematics? I was hoping to find someone knowledgeable about Fourier transforms and their applications that I could bounce a bunch of questions off of. Please PM me if anyone would be so kind!

Check out this guy's youtube channel: https://www.youtube.com/user/ddorran/playlists He has some nice playlists explaining the Fourier transform, the discrete Fourier transform, the Z domain, sampling, zero padding, etc., all the things you will need if you are working with sound. Wish I could help you more, but I always understood Fourier transforms in a very superficial way. If you really want to understand them you will need to go deep into the math, but there are some very good ways to visualize these transforms, which help with the extremely abstract math.
mozoku
United States708 Posts
On June 09 2017 09:22 Poopi wrote: @JimmyJRaynor: if your university needs more than 3/35 people to pass the year (and it's very likely), your course was very badly designed imho :o. I have a question, but I'm not sure if the answer is trivial or if we don't have the answer yet; it's related to probability, although with some CS in it, so I'll ask here! Say we build a Bayesian model that estimates the odds of a real-life event A happening at 95%. But the event happens in real life only once (the result of an election, for example). Say A happens as "predicted". So what? Was our estimate accurate? Maybe it actually had an 80% chance of happening, but it still happened because it's still a likely event. But it didn't necessarily happen by random chance, especially if the event we are trying to predict is an election and not some randomly generated thing. So how can we judge whether our model was well suited? I guess if we just want our model to predict what will happen while minimizing loss and so on, like we often do, then there is no problem. But I feel like there is an inherent philosophical/epistemological problem with Bayesian models :/.

edit: would being able to reproduce the event enough times for it to be statistically significant allow us to correctly evaluate whether our estimate was right? But it still wouldn't be an absolutely precise estimate, and is it even possible to have such a thing? (think about FiveThirtyEight and the likes for context)

edit 2: another "application" of this question would be smartphone weather predictions! They probably use some kind of Bayesian model for that, and they will tell you: "there is a 30% chance that it'll rain at this hour." How are we supposed to use this information? Assuming their estimate is roughly correct, a wise choice would be to take an umbrella if doing so has a positive mathematical expectation, because we have the probability of the event... but how can I quantitatively assess how much of a pain not having an umbrella would be? I can say: "I would feel neutral if I have an umbrella and it is raining, so I assign 0 value to having an umbrella". But to have a rough idea of how painful it'll be not having an umbrella if it happens to rain, I would need to know how much and for how long it would rain! And if they can't tell us that, I can't really put their intel about the weather to good use. It won't ever be the wisest choice :/.

For your first question, it doesn't matter whether the model is Bayesian or not. Bayesian statistics uses Bayes' theorem to come up with a posterior for the model parameters, but the point estimate you take from the posterior and use as your prediction has the same interpretation as a non-Bayesian model's prediction. If you want to quantify model "accuracy" (using the term loosely here), a Bayesian model is evaluated with the same metrics as non-Bayesian models (with the exception of metrics that require a posterior). Of course, it's difficult to evaluate the quality of a probability-prediction model with a single test point. However, with good modeling, good priors, and a number of test points large enough to make evaluation sensible (but not so large that the value of the prior information becomes negligible), Bayesian models will usually outperform most non-Bayesian models. (Disclaimer: I'm making a lot of assumptions here, but trying to speak generally enough to be useful and carefully enough to stay accurate.)

I don't see why you think this is a philosophical problem with Bayesian inference. Bayesian inference isn't really advertised as something that allows you to evaluate models with fewer test points. It's usually advertised as something that allows you to incorporate prior information to build better models when there's little data available, has nicer interpretations of uncertainty measures, relies somewhat less on parametric assumptions than frequentist statistics, and gives you a full posterior as opposed to point estimates.
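One standard metric for judging probability predictions against 0/1 outcomes, Bayesian or not, is a proper scoring rule such as the Brier score. A sketch with made-up predictions and outcomes:

```python
# Brier score: mean squared difference between the predicted probability
# and the 0/1 outcome. Lower is better, and it rewards calibrated models.
def brier_score(probs, outcomes):
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# made-up example: predicted rain probabilities vs. whether it actually rained
probs = [0.9, 0.8, 0.3, 0.1, 0.6]
outcomes = [1, 1, 0, 0, 1]
print(brier_score(probs, outcomes))  # ~0.062
```

With a single test point the score is still defined but tells you almost nothing, which is exactly the single-election problem above.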
Nin54545
8 Posts
On June 10 2017 01:27 Oshuy wrote: Then it is probably too early. In this specific case, sin(30), cos(60) and log100(10) are just clever ways of writing 1/2, and most of the sqrt() also disappear. Pretty much everything simplifies nicely; it basically looks like an equation written to express something simple in a complicated way. The weird ones are:
- 14.661 and 21584, which seem arbitrary. Probably aimed at obtaining a given number as a result, but not an elegant way to do it.
- D just looks wrong. Probably 512 and not 5*sqrt(2), but even then it should probably read (512*0.5)² instead of 512*(0.5)² (and then D=8).
- sqrt(1/2.sqrt(16)) in C looks strange; it leaves a sqrt(2) in the equation, which is awkward when everything else is just fractions.

ty ))))
Poopi
France12468 Posts
On June 10 2017 05:50 mozoku wrote: For your first question, it doesn't matter whether the model is Bayesian or not. Bayesian statistics uses Bayes' theorem to come up with a posterior for the model parameters, but the point estimate you take from the posterior and use as your prediction has the same interpretation as a non-Bayesian model's prediction. If you want to quantify model "accuracy" (using the term loosely here), a Bayesian model is evaluated with the same metrics as non-Bayesian models (with the exception of metrics that require a posterior). Of course, it's difficult to evaluate the quality of a probability-prediction model with a single test point. However, with good modeling, good priors, and a number of test points large enough to make evaluation sensible (but not so large that the value of the prior information becomes negligible), Bayesian models will usually outperform most non-Bayesian models. (Disclaimer: I'm making a lot of assumptions here, but trying to speak generally enough to be useful and carefully enough to stay accurate.) I don't see why you think this is a philosophical problem with Bayesian inference. Bayesian inference isn't really advertised as something that allows you to evaluate models with fewer test points. It's usually advertised as something that allows you to incorporate prior information to build better models when there's little data available, has nicer interpretations of uncertainty measures, relies somewhat less on parametric assumptions than frequentist statistics, and gives you a full posterior as opposed to point estimates.

But my question is: can you know the real probability of the event?
Lebesgue
4541 Posts
On June 10 2017 07:11 Poopi wrote: But my question is: can you know the real probability of the event?

With a finite amount of data you will never be able to learn the "real" probability of an event. What you obtain using statistical methods is always an estimate. Scientific articles that use statistical analysis will always report both point estimates and standard deviations, confidence intervals, or posterior belief distributions, to measure how precise the reported point estimate is.
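A sketch of that practice: estimate a coin's bias from finite data and report a rough 95% confidence interval alongside the point estimate, rather than the point estimate alone (the flip counts are made up, and the interval uses the normal approximation):

```python
import math

heads, n = 53, 100            # made-up data: 53 heads in 100 flips
p_hat = heads / n             # point estimate of the bias
se = math.sqrt(p_hat * (1 - p_hat) / n)        # standard error of p_hat
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se  # ~95% CI (normal approximation)
print(f"{p_hat:.2f} with 95% CI [{lo:.3f}, {hi:.3f}]")
```

Here the interval still contains 0.5, so 53 heads in 100 flips is no evidence at all that the coin is unfair.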
HKTPZ
105 Posts
On June 10 2017 07:11 Poopi wrote: But my question is: can you know the real probability of the event?

Suppose we knew everything about how everything works; then yes, in that case we would be able to know the real probability (which would be either 0 or 1). But at the end of the day we know very little about how everything works, and we may not even truly realize how little we know. Trying to predict something like an election or the weather comes down to simplified models over the factors we think influence the outcome; then the models can be reevaluated, how the factors are weighed can be adjusted, etc.
Ernaine
60 Posts
On June 10 2017 07:11 Poopi wrote: But my question is: can you know the real probability of the event?

In science (i.e. not mathematics), nothing is absolute. Even for the most mundane and predictable of events, we cannot know anything 100%. For example, when I throw a die, we cannot know that there is zero chance of the die shattering on impact. Yes, we can calculate the forces involved. But who is to say that, for the first time ever, the laws of nature won't suddenly change?

In the same way, if a die truly gives 50/50 for odd vs even, you only get to exactly 50/50 going to infinity. In fact, it is completely impossible to get 50/50 if you throw a die an odd number of times. The reason we know a die is 50/50 to be odd or even is because we know exactly the number of sides it has. The assumption then is that the die is perfectly fair, which probably isn't the case.

If you are really bored, you can take a bunch of dice, or coins, and throw/flip each of them an absurd number of times. Then calculate how likely the outcome you got is under the assumption that the die/coin is fair. In principle, any imperfection or flaw means the die/coin isn't perfectly symmetrical, and thus it can in principle lead to a biased die. So in the case of an unfair die/coin, there is no way to know with 100% accuracy what the exact probabilities are. The law of large numbers can get you far enough, far enough for any real-world application. (Now, if you really need a lot of throws, the die/coin may actually wear, and its fairness may change as a function of the number of throws, adding another layer of complexity.)

So it is mainly a philosophical debate. That said, you could probably control the outcome of the die/coin 100% by 'deciding' how you throw it. But I don't know of any magicians that have enough skill to throw a die in a certain way so it gets them the outcome they want.
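The 'calculate how likely the outcome is under the assumption of fairness' step can be sketched with an exact binomial tail probability (the observed count is made up):

```python
from math import comb

def prob_at_least(k, n, p=0.5):
    """Probability of seeing k or more heads in n flips of a coin with bias p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# made-up observation: 60 heads in 100 flips of a supposedly fair coin
tail = prob_at_least(60, 100)
print(tail)  # ~0.028: fairly unlikely under fairness, but not damning on its own
```

This is essentially a one-sided binomial test; no finite number of flips ever gets you certainty, only a shrinking tail probability.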
hypercube
Hungary2735 Posts
On June 10 2017 07:43 fishjie wrote: Yeah, probability is a way to estimate the likelihood of something happening without complete information. If we had precise knowledge of how you flip the coin (initial starting position, angle of the flick, force of the flick, exact dimensions of the coin and its weight, wind conditions, and who knows how many other parameters), the probability of heads is not 1/2. You'd be able to build a physical model to know the exact answer. It'd be 0 or 1. At that point it's purely deterministic.

And even then it would only be an exact answer under the assumption that the physical model is completely accurate.
fishjie
United States1519 Posts
On June 10 2017 07:45 Ernaine wrote: In science (i.e. not mathematics), nothing is absolute. Even for the most mundane and predictable of events, we cannot know anything 100%. For example, when I throw a die, we cannot know that there is zero chance of the die shattering on impact. Yes, we can calculate the forces involved. But who is to say that, for the first time ever, the laws of nature won't suddenly change?

Ah, fair point. Related reading on that point: https://en.wikipedia.org/wiki/Sunrise_problem
Poopi
France12468 Posts
On June 10 2017 07:45 Ernaine wrote: In science (i.e. not mathematics), nothing is absolute. Even for the most mundane and predictable of events, we cannot know anything 100%. For example, when I throw a die, we cannot know that there is zero chance of the die shattering on impact. Yes, we can calculate the forces involved. But who is to say that, for the first time ever, the laws of nature won't suddenly change? In the same way, if a die truly gives 50/50 for odd vs even, you only get to exactly 50/50 going to infinity. In fact, it is completely impossible to get 50/50 if you throw a die an odd number of times. The reason we know a die is 50/50 to be odd or even is because we know exactly the number of sides it has. The assumption then is that the die is perfectly fair, which probably isn't the case. If you are really bored, you can take a bunch of dice, or coins, and throw/flip each of them an absurd number of times. Then calculate how likely the outcome you got is under the assumption that the die/coin is fair. In principle, any imperfection or flaw means the die/coin isn't perfectly symmetrical, and thus it can in principle lead to a biased die. So in the case of an unfair die/coin, there is no way to know with 100% accuracy what the exact probabilities are. The law of large numbers can get you far enough, far enough for any real-world application. (Now, if you really need a lot of throws, the die/coin may actually wear, and its fairness may change as a function of the number of throws, adding another layer of complexity.) So it is mainly a philosophical debate. That said, you could probably control the outcome of the die/coin 100% by 'deciding' how you throw it. But I don't know of any magicians that have enough skill to throw a die in a certain way so it gets them the outcome they want.

You don't only get to exactly 50/50 going to infinity: you have a 50% chance of getting exactly 50/50 if you throw it 2 times.

And I know the definition of probability with infinity and such; that's not really my question. About the dice example: it has less chance of giving 6 than 1, afaik, because there are fewer holes in the 1 face, but again that's not my question :/. And I'm not talking about dice at all, because dice are pretty well random. What I'm talking about is the probability of real-life events that are a priori not random: are we still stuck at determinism issues?
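Poopi's 50% figure checks out: for n even throws, the chance of an exact 50/50 split is C(n, n/2)/2^n, and it shrinks as n grows. A quick sketch:

```python
from math import comb

def exact_split(n):
    """Probability of exactly n/2 'odd' results in n fair throws (n even)."""
    return comb(n, n // 2) / 2**n

print(exact_split(2))    # 0.5, i.e. C(2,1)/4
print(exact_split(100))  # ~0.08: an exact split gets *less* likely as n grows
```

This is the sense in which 'exactly 50/50' and 'converging to 50/50 in the limit' are different statements: the proportion converges even though an exact split becomes rarer.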
Ernaine
60 Posts
But if we flip only 2 times and get both results once, the hypothesis that the true nature of the coin is that we get 10% heads and 90% tails is still somewhat in agreement with the data. And that is quite a bit different from 50/50. Yes, after 2 trials we can get exactly 50/50, and we can get similar results at 4 trials and all the other even trials, but we still don't know with high probability that the coin is in fact a completely fair 50/50 coin. So yes, we need to go to infinity to pin down exactly 50/50. You can try it with a computer. (Yes, it will have pseudo-random numbers, so it is still a bit iffy, just like the assumption of a completely fair coin.)
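The 'still somewhat in agreement with the data' claim can be made concrete by comparing the likelihood of one head and one tail under the two hypotheses (a sketch):

```python
def likelihood(p, heads, tails):
    """Probability of a specific head/tail sequence under bias p."""
    return p**heads * (1 - p)**tails

# one head and one tail observed; factor 2 counts the two possible orders
fair = 2 * likelihood(0.5, 1, 1)    # 0.5
biased = 2 * likelihood(0.1, 1, 1)  # ~0.18: less likely, but far from ruled out
print(fair, biased)
```

Two flips barely separate the hypotheses; the likelihood ratio here is under 3 to 1, which is why many more trials are needed before the 10/90 coin can be confidently rejected.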