|
On June 24 2017 03:54 ninazerg wrote: Does 1 + 1 ever not equal 2? Usually, one defines 2 to be 1+1. In this sense, 1+1=2 always holds.
But you can have 1+1=0, for example, in which case 2=0 (so still 1+1=2). In this case, if there is already some object taking the role of "2", one typically writes 0 instead of 2, to have an unambiguous notation.
|
No. 1+1=10 in many many daily cases
|
On June 24 2017 04:13 Mafe wrote: Usually, one defines 2 to be 1+1. In this sense, 1+1=2 always holds. But you can have 1+1=0, for example, in which case 2=0 (so still 1+1=2). In this case, if there is already some object taking the role of "2", one typically writes 0 instead of 2, to have an unambiguous notation.
Indeed. In the realm of natural numbers (or the reals, or the rationals, or a lot of other realms), 1 + 1 = 2.
However, it is quite easy to conceive of a mathematical object worth studying where that is not the case. Take, for example, F2: the field of numbers modulo 2. In that field, there are only two objects: everything that is congruent to 0 modulo 2, and everything that is congruent to 1 modulo 2. (One could of course also give these different names.)
It is quite obvious that 1+1 in that field equals 0, because 2 is equal to zero in that field.
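To make the F2 arithmetic concrete, here is a minimal Python sketch (the function names are just illustrative, not any standard library's):

```python
# Arithmetic in F2, the field with two elements: do ordinary
# integer arithmetic, then reduce modulo 2.

def f2_add(a, b):
    """Addition in F2."""
    return (a + b) % 2

def f2_mul(a, b):
    """Multiplication in F2."""
    return (a * b) % 2

# In F2, 1 + 1 = 0, since 2 is congruent to 0 modulo 2.
print(f2_add(1, 1))  # 0
print(f2_mul(1, 1))  # 1
```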
|
On June 24 2017 01:33 AbouSV wrote: So in the end, Travis made this thread to solve some homework questions :D
in all fairness, it was a really hard question, lol
it took a long time but Shala helped me solve it over skype.
|
On June 24 2017 03:54 ninazerg wrote: Does 1 + 1 ever not equal 2?
1+1=3 for especially large values of 1
Just a joke about rounding numbers... typically, 1.3 would be rounded down to 1 while 2.6 would be rounded up to 3, so while 1.3+1.3=2.6, individually rounding those three numbers would create the incorrect estimation that 1+1=3.
|
On June 24 2017 12:10 DarkPlasmaBall wrote: 1+1=3 for especially large values of 1. Just a joke about rounding numbers... typically, 1.3 would be rounded down to 1 while 2.6 would be rounded up to 3, so while 1.3+1.3=2.6, individually rounding those three numbers would create the incorrect estimation that 1+1=3.
Not so much a joke, just showing you're a physicist :p Coming from math, I had a really hard time at first understanding the shenanigans behind 'pi = 1' or '1000+200 = 1000' and such.
|
On June 24 2017 03:54 ninazerg wrote: Does 1 + 1 ever not equal 2?
1 + 1 = 0 (Mod 2)
And I guess looking above, lots of times.
If you can't post pictures, you need to add .png to the image link.
|
Math thread \o/
I've forgotten so much math already
|
On June 24 2017 18:13 AbouSV wrote: Not so much a joke, just showing you're a physicist :p Coming from math, I had a really hard time at first understanding the shenanigans behind 'pi = 1' or '1000+200 = 1000' and such.
I feel just the opposite. I get annoyed when people are working with rough estimates but use a value of pi that is correct to 5 decimal places. I know that's what some textbooks tell you to do, but come on people, use common sense.
|
So, this is kind of a crossover between CS and math. Not sure which thread to post this in. I guess math, but hopefully some people here understand it.
I have a boolean formula: (A ^ B) ^ (C V D) V (B ^ D) ^ (~C V ~D), etc. We are looking for the asymptotic complexity of finding satisfiability (a combination of trues and falses that makes the final output true) through brute force. The question is basically how many combinations of inputs there are. I believe the answer is 2^n.
The 2nd part of the question says that we now have the ability to pass our algorithm the assignment (A = true, B = false, C = true, etc. - one of our 2^n combinations) - but we can ALSO pass an integer "k". Our algorithm will return whether our assignment satisfies the formula, and will ALSO return whether the boolean formula can be satisfied with "k" or fewer TRUE values.
The question for the 2nd part is what is the asymptotic complexity to find a satisfying assignment. Here is what I believe the answer is.
First, find the exact "k" value. This can be done by passing "k = n"; if it comes back true, cut "k" in half and pass it again. If that comes back false, try the midpoint between k and n; if it's true, halve again... etc. Keep halving the interval until we pin down the exact k. This is log_2(n) operations.
Once we have an exact k, we know we need to pass that many trues into our algorithm. Therefore the total number of combinations to try will be (n choose k).
So I think the complexity this time will be log_2(n) + (n choose k), which asymptotically is just n choose k - which in the average case I think is much better than the first scenario.
Does this math look good? I just wanted to see if anyone spots any obvious mistakes.
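Not the actual homework formula, but here is a toy sketch of both brute-force strategies in Python (the 4-variable formula and all names below are made up for illustration), including the binary search for the smallest k:

```python
from itertools import combinations, product

# Toy 4-variable formula, standing in for the real one.
# Python's `and` binds tighter than `or`, matching the grouping above.
def formula(a, b, c, d):
    return (a and b) and (c or d) or (b and d) and (not c or not d)

n = 4

# Part 1: brute force over all 2^n assignments.
sat_all = [bits for bits in product([False, True], repeat=n) if formula(*bits)]

# Part 2, step 1: binary search for the smallest k such that the formula
# is satisfiable with at most k True values (log_2(n) oracle calls).
def oracle(k):
    """Is the formula satisfiable with at most k True inputs?"""
    return any(formula(*bits)
               for bits in product([False, True], repeat=n)
               if sum(bits) <= k)

def smallest_k(oracle, n):
    lo, hi = 0, n
    while lo < hi:
        mid = (lo + hi) // 2
        if oracle(mid):
            hi = mid      # satisfiable with <= mid Trues: search lower half
        else:
            lo = mid + 1  # not satisfiable: need more Trues
    return lo

k = smallest_k(oracle, n)

# Part 2, step 2: only try the (n choose k) assignments with exactly k Trues.
sat_k = []
for positions in combinations(range(n), k):
    bits = tuple(i in positions for i in range(n))
    if formula(*bits):
        sat_k.append(bits)

print(k, len(sat_all), sat_k)
```

For this toy formula the search settles on k = 2, and only C(4, 2) = 6 assignments need checking instead of 2^4 = 16.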
|
On June 24 2017 22:32 hypercube wrote: I feel just the opposite. I get annoyed when people are working with rough estimates but use a value of pi that is correct to 5 decimal places. I know that's what some textbooks tell you to do, but come on people, use common sense.
I still agree. I mean pi at 5 decimals is as bad as taking 1 or 10. You can't replace pi by anything, just keep it. This has been the topic of many discussions at the (physics) lab :p
|
On June 24 2017 23:51 AbouSV wrote: I still agree. I mean pi at 5 decimals is as bad as taking 1 or 10. You can't replace pi by anything, just keep it. This has been the topic of many discussions at the (physics) lab :p
It depends on *why* your error is what it is. If you have a 10% error because that's the limit of your instrument and there's no reasonable way to improve it, then sure use the best value of pi you can. But if you have a 10% error because you are just looking for a ballpark figure then what's the point? You might as well put that extra effort into improving your measurement or your model or whatever.
There's a difference between measurements and ballpark estimates. Both have errors, but when you estimate you create errors in order to keep moving along and get to some reasonable number as fast as possible. Saying that pi ~ 3.14 is no different than saying 900 + 95 ~= 1000
|
On June 24 2017 23:50 travis wrote: So, this is kind of a crossover between CS and math. Not sure which thread to post this in. I guess math, but hopefully some people here understand it.
I have a boolean formula: (A ^ B) ^ (C V D) V (B ^ D) ^ (~C V ~D), etc. We are looking for the asymptotic complexity of finding satisfiability (a combination of trues and falses that makes the final output true) through brute force. The question is basically how many combinations of inputs there are. I believe the answer is 2^n.
The 2nd part of the question says that we now have the ability to pass our algorithm the assignment (A = true, B = false, C = true, etc. - one of our 2^n combinations) - but we can ALSO pass an integer "k". Our algorithm will return whether our assignment satisfies the formula, and will ALSO return whether the boolean formula can be satisfied with "k" or fewer TRUE values.
The question for the 2nd part is what is the asymptotic complexity to find a satisfying assignment. Here is what I believe the answer is.
First, find the exact "k" value. This can be done by passing "k = n", then if it comes back true, cut "k" in half, pass it again. If it's false cut it in half between k and n, if it's true again cut it in half again.. etc etc. Keep dividing our portions by half until we get an exact k. This is log_2(n) operations.
Once we have an exact k, we know we need to pass that many trues into our algorithm. Therefore the total number of combinations to try will be (n choose k).
So I think the complexity this time will be log_2(n) + (n choose k), which asymptotically is just n choose k - which in the average case I think is much better than the first scenario.
Does this math look good? I just wanted to see if anyone spots any obvious mistakes.
Soon we are going to need the Algorithms Thread.
I'm not sure I understand part 2; do you mean you can choose k out of n inputs arbitrarily, or the first k of n?
|
In part 2 you can pass an integer "k". in addition to your series of true/false.
It then tells you if the formula is satisfiable with "k" true values or fewer. So if your formula needs 10 true values, and you pass it a "k" value of 9, it will return false for that.
It's not tied to the actual series of boolean variables you are sending, other than that they are both related to solving the formula. They could be 2 different functions for all it matters.
|
On June 25 2017 10:37 travis wrote: In part 2 you can pass an integer "k". in addition to your series of true/false.
It then tells you if the formula is satisfiable with "k" true values or fewer. So if your formula needs 10 true values, and you pass it a "k" value of 9, it will return false for that.
It's not tied to the actual series of boolean variables you are sending, other than that they are both related to solving the formula. They could be 2 different functions for all it matters.
Are there any restrictions on the formula? The example is (A op B) op (...) ..., so is it always the same pattern of brackets? Not sure about this off the top of my head. The naive solution is to check everything at O(2^n), but if you analyse the formula itself, shouldn't it be something in O(n), because you just count the operations in a certain way? E.g., (n-k) equals the number of ANDs or some such.
Edit: The problem I have with your original solution is that you are counting the number of calls to f(n, k) via a binary search on k. But I am wondering if the question is to figure out the complexity of f itself - how it determines what to return for a given k.
|
In all my years in academic mathematics, this is the best math problem I have ever encountered: so simple, yet so frustratingly elusive. And the answer is elegant and massively satisfying.
problem: prove that 1+1=2 without using addition and numbers in your process and solution.
anybody want to have a go?
|
On June 25 2017 14:17 xwoGworwaTsx wrote: In all my years in academic mathematics, this is the best math problem I have ever encountered: so simple, yet so frustratingly elusive. And the answer is elegant and massively satisfying.
problem: prove that 1+1=2 without using addition and numbers in your process and solution.
anybody want to have a go?
What's your area of mathematics?
|
On June 25 2017 14:17 xwoGworwaTsx wrote: In all my years in academic mathematics, this is the best math problem I have ever encountered: so simple, yet so frustratingly elusive. And the answer is elegant and massively satisfying.
problem: prove that 1+1=2 without using addition and numbers in your process and solution.
anybody want to have a go? Peano arithmetic defines numbers as literals in a FOL. Does that count as "using numbers"? Otherwise you could do it in set theory, but it's basically the same.
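One classic route, in the spirit of the Peano approach mentioned above, is to build the naturals from scratch so that "1", "2", and "+" are your own constructions rather than built-in numerals. A sketch in Lean 4 (the names are ours, not a standard library's):

```lean
-- Peano-style naturals defined from scratch: no numeral literals,
-- no built-in addition.
inductive PNat where
  | zero : PNat
  | succ : PNat → PNat

open PNat

-- Addition by recursion on the second argument.
def add : PNat → PNat → PNat
  | m, zero   => m
  | m, succ n => succ (add m n)

def one : PNat := succ zero
def two : PNat := succ (succ zero)

-- "1 + 1 = 2" then holds by computation: add one one unfolds to
-- succ (succ zero), which is definitionally two.
theorem one_add_one : add one one = two := rfl
```

Whether this counts as "without numbers" depends on whether you accept `succ zero` as not being a number; the famous Principia Mathematica version takes hundreds of pages to reach the same fact from pure logic.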
|
Could someone help me with summation algebra
I keep getting problems where I am summing to some portion of n, say
the sum from i=1 to (n/2) of i
now, I know that the sum from i=1 to n of i is the Gauss sum: (1/2)n(n+1)
I know this because I have memorized it
but when we sum to n/2 it becomes (1/8)n(n+2)
I know this because wolfram alpha tells me so
but how can I manually solve this sum? Is there some rule for what happens to the sum when I only sum to a portion of my "n" ?
|
Another way to put it:
let f(x) = sum from i=1 to x of i
let g(x) = (1/2) * x * (x+1)
you know that f(x) = g(x) for all x.
what you want is f(n/2) which is equal to g(n/2)
so if you just plug n/2 instead of x in g, you get
g(n/2) = (1/2) * (n/2) * (n/2 + 1) = (n/4) * ((n+2)/2) = n(n+2)/8
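As a quick numeric sanity check of that algebra, here is a small Python sketch comparing the direct sum against the closed form n(n+2)/8 for even n:

```python
# Compare 1 + 2 + ... + (n/2) against the closed form n(n+2)/8.

def direct_sum(n):
    """Sum i from 1 to n//2 directly."""
    return sum(range(1, n // 2 + 1))

def closed_form(n):
    """n(n+2)/8, which is an integer whenever n is even."""
    return n * (n + 2) // 8

# Check all even n up to 100 (even, so n/2 is an integer).
for n in range(2, 101, 2):
    assert direct_sum(n) == closed_form(n)
print("closed form matches for all even n up to 100")
```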
|