|
On February 23 2018 23:34 Joni_ wrote: Random pure maths question:
What are sufficient conditions on a Banach Algebra to guarantee that an element whose spectrum is a subset of {0,1} is a projection, i.e., self-adjoint and idempotent?
The Algebra being a C*-Algebra is certainly sufficient, but I'm specifically looking for weaker conditions, because C* is not guaranteed in my case.
Look into what happens to the fixed orbit of elements acting on rational points under the Frobenius automorphism.
|
On February 24 2018 08:41 KR_4EVR wrote:Show nested quote +On February 23 2018 23:34 Joni_ wrote: Random pure maths question:
What are sufficient conditions on a Banach Algebra to guarantee that an element whose spectrum is a subset of {0,1} is a projection, i.e., self-adjoint and idempotent?
The Algebra being a C*-Algebra is certainly sufficient, but I'm specifically looking for weaker conditions, because C* is not guaranteed in my case. Look into what happens to the fixed orbit of elements acting on rational points under the Frobenius automorphism. Care to elaborate? I only know the Frobenius automorphism as an automorphism of fields of prime characteristic.
Tbh I'm not particularly knowledgeable in a lot of advanced C*-topics like K-Theory in general, it's mostly spectral theory/Gelfand theory for me, so it might be that I simply lack the required conceptual background. :>
|
On February 24 2018 09:14 Joni_ wrote:Show nested quote +On February 24 2018 08:41 KR_4EVR wrote:On February 23 2018 23:34 Joni_ wrote: Random pure maths question:
What are sufficient conditions on a Banach Algebra to guarantee that an element whose spectrum is a subset of {0,1} is a projection, i.e., self-adjoint and idempotent?
The Algebra being a C*-Algebra is certainly sufficient, but I'm specifically looking for weaker conditions, because C* is not guaranteed in my case. Look into what happens to the fixed orbit of elements acting on rational points under the Frobenius automorphism. Care to elaborate? I only know the Frobenius automorphism as an automorphism of fields of prime characteristic. Tbh Im not particularly knowledgeable in a lot of advanced C*-topics like K-Theory in general, it's mostly spectral theory/Gelfand theory for me, so might be that I simply lack the required conceptual background. :>
What I meant is you need to formulate an injective but not surjective map (or measure) from your Banach space to an appropriate field with a more restrictive topology. Study what happens in that case; it will illuminate the full problem. This is a common strategy. Look at Tychonoff's theorem.
|
heeelp, oral math exam incoming :p
(EDIT: I think I pretty much understand this, for now. I know that this theorem will be used in later proofs. I'm not there yet.)
I'm trying to understand the differences between the fundamental theorem of calculus (complex) and Cauchy's integral theorem. My problem is that the fundamental theorem seems more general and stronger than Cauchy's. I'm not getting the gist of Cauchy's.
(after an hour of thinking and editing this post: ) Some thoughts: Cauchy's integral theorem tells us for which kind of domain a holomorphic function will definitely have an antiderivative. When you just look at the fundamental theorem, it's unclear if f(z)=1/z could ever have an antiderivative, even though it's holomorphic. But with Cauchy it's clear that it will have an antiderivative on a simply connected domain.
PS: I hope I'm making enough sense here. My "math english" is terrible :/
|
Your additional thoughts are exactly on point. The fundamental theorem starts from functions that have a holomorphic antiderivative, which is not the same class as holomorphic functions.
Cauchy's integral theorem states that holomorphic functions on a simply connected domain integrate to zero along closed contours, and hence have antiderivatives there. It is thus stronger in that regard.
Meanwhile, as you noticed, once you know a holomorphic antiderivative exists, the fundamental theorem becomes the stronger statement.
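Not part of the thread, but a quick numerical illustration of the 1/z point above (a Python sketch; the helper name is made up): if 1/z had an antiderivative on a domain containing the unit circle, its integral around that circle would be 0 by the fundamental theorem; instead it comes out as 2πi.

```python
import cmath
import math

def unit_circle_integral(f, n=10000):
    # Midpoint-rule approximation of the contour integral of f around
    # the unit circle, traversed once counterclockwise.
    total = 0j
    for k in range(n):
        t0 = 2 * math.pi * k / n
        t1 = 2 * math.pi * (k + 1) / n
        z0, z1 = cmath.exp(1j * t0), cmath.exp(1j * t1)
        midpoint = cmath.exp(1j * (t0 + t1) / 2)
        total += f(midpoint) * (z1 - z0)
    return total

integral = unit_circle_integral(lambda z: 1 / z)
print(integral)  # very close to 2*pi*i, not 0
```

By contrast, a function like f(z) = z, which does have an antiderivative everywhere, integrates to (numerically) zero around the same circle.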
|
On March 10 2018 07:43 KR_4EVR wrote:Show nested quote +On February 24 2018 09:14 Joni_ wrote:On February 24 2018 08:41 KR_4EVR wrote:On February 23 2018 23:34 Joni_ wrote: Random pure maths question:
What are sufficient conditions on a Banach Algebra to guarantee that an element whose spectrum is a subset of {0,1} is a projection, i.e., self-adjoint and idempotent?
The Algebra being a C*-Algebra is certainly sufficient, but I'm specifically looking for weaker conditions, because C* is not guaranteed in my case. Look into what happens to the fixed orbit of elements acting on rational points under the Frobenius automorphism. Care to elaborate? I only know the Frobenius automorphism as an automorphism of fields of prime characteristic. Tbh I'm not particularly knowledgeable in a lot of advanced C*-topics like K-Theory in general, it's mostly spectral theory/Gelfand theory for me, so it might be that I simply lack the required conceptual background. :> What I meant is you need to formulate an injective but not surjective map (or measure) from your Banach space to an appropriate field with a more restrictive topology. Study what happens in that case; it will illuminate the full problem. This is a common strategy. Look at Tychonoff's theorem. Thanks for the reply, I didn't even notice until today. Yet there seems to be some communication mishap (I guess I just don't quite get what you mean):
There will never be an injective map that is also an algebra homomorphism from a general Banach Algebra into any field, as a Banach Algebra is not in general an integral domain while a field is.
A map that is not an algebra homomorphism will not allow me to deduce any algebraic properties of the elements of an algebra (like idempotence) from their images, so I don't see how that could help.
As far as I can tell, Tychonoff's Theorem seems to be completely unrelated to my case, but maybe I'm just not seeing the proper broader picture.
|
Hey Ciaus sorry for not replying, I got deep into my work. I will tell you I did use what you provided, but the algorithm itself did not pan out.
but anyways here I am with a new question for you math geniuses
I have written an algorithm that works with graphs. The long and short of it is, I arrive at a point where I have k sets, each has unique values within them. The sets vary in size.
So maybe I have 8 sets: three are size 1, two are size 2, two are size 4, and one of them is size 9.
I have to separate these k sets into n sets, where n is a value that is fixed ahead of time. Let's say n is 18.
Well, using basic math I can see that I have 3+4+8+9 = 24 values to work with. 24/18 = 1.3333.. average values per set, in order to form n sets.
Now here is where things get tough. The k sets are formed as such because they are disjoint, and the values must remain disjoint. This means that each set of size one must remain size one. In a similar fashion, none of the other sets can be "mixed" with values from another set. So, two sets of size two could never form a new set of size 3, nor could a value from the set of size 9 be put into a set with a value from a set of size 4 to form a set of size 2. Essentially, values can only form a new set with other values they were originally matched with.
Using this information, is it possible to actually solve what the sizes of my n disjoint sets must be?
DISCLAIMER: The above example is not a real example and likely does not have an actual solution. If needed, I could run my algorithm to get a real world example.
|
If I understand the problem correctly then there are obviously only solutions for n greater than or equal to k. Can't you just sort the k sets in ascending order and pick single elements to form new size-1 sets until you have as many sets as you want? Now calculating the sizes of the new sets shouldn't be a problem. (You have all untouched old sets, some new size-1 sets, and maybe 1 half-used old set.)
|
@travis: awesome, glad it was of some help, sorry things didn't work =/
For the new problem, is there any requirement other than simply the number of new sets you must have? If not, then @Kleinmuuhg's answer looks perfect. If you want to keep sets small-ish, you can just subdivide the largest sets first, until you have n in total.
|
Also while I'm here, here's a fun thought-experiment that uses a bit of basic graph theory and probability intuition:
Say you are running some NGO which delivers vaccines to countries suffering epidemics. You only have enough vaccines to vaccinate some small portion of the population (say less than 5%), and you can't gather any data on the population / distribution / individuals in the country, all you can do is randomly give people the sealed vaccine, and some instructions. You may assume the people you give the vaccine to will follow your instructions.
What instructions do you give to most effectively stem the epidemic?
Rough solution: + Show Spoiler + With varying extra details and depth, the very basic idea is as follows: You instruct the person you give the vaccine to not to take it themselves, but rather to give it to a friend, who should then take it. Those with more friends are hence more likely to get vaccinated. The assumption being that a person's capacity to spread the infection is proportional to how much they interact with others in their surroundings. In essence, the strategy targets those with greatest ability to spread infection.
Explicitly modeled, your population is a graph. People = vertices, friendships = edges. More incident edges, more likelihood of receiving vaccine from a friend as well as higher expected amount of infection spreading.
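The "your friends have more friends than you do" effect behind this strategy is easy to demonstrate (a Python sketch, not from the thread; the preferential-attachment graph is just a crude stand-in for a social network with hubs):

```python
import random

def ba_graph(n, m=2, seed=0):
    # Barabasi-Albert-style preferential attachment: each new node links to m
    # existing nodes chosen proportionally to their current degree, producing
    # a few very-high-degree "hub" people.
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for u in range(m + 1):              # seed with a small clique
        for v in range(u + 1, m + 1):
            adj[u].add(v)
            adj[v].add(u)
    endpoints = [v for v in range(m + 1) for _ in range(m)]
    for v in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(endpoints))
        for u in chosen:
            adj[v].add(u)
            adj[u].add(v)
            endpoints += [u, v]         # keep list proportional to degree
    return adj

def degree_stats(adj):
    # Average degree of a uniformly random person, versus the expected degree
    # of a uniformly random *friend* (edge endpoint). Picking a friend samples
    # people proportionally to degree: E[d^2]/E[d] >= E[d] by Cauchy-Schwarz.
    degs = [len(adj[v]) for v in adj]
    person = sum(degs) / len(degs)
    friend = sum(d * d for d in degs) / sum(degs)
    return person, friend

person, friend = degree_stats(ba_graph(2000, 2))
print(person, friend)  # the random friend's expected degree is strictly larger
```

Sampling a random friend weights people by their degree, which is exactly why handing the vaccine to a friend targets the hubs.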
|
On May 17 2018 04:57 Kleinmuuhg wrote: If I understand the problem correctly then there are obviously only solutions for n greater or equal than k. Cant you just sort the k sets in ascending order and pick single elements to form new size 1 sets until you have as many sets as you want? Now calculating the sizes of the new sets shouldnt be a problem. (You have all untouched old sets, some new size 1 sets and maybe 1 half-used old set).
On May 17 2018 05:28 Ciaus_Dronu wrote: For the new problem, is there any requirement other than simply the number of new sets you must have? If not, then @Kleinmuuhg's answer looks perfect. If you want to keep sets small-ish maybe, you can just subdivide up the largest sets first, until you have n in total.
yes, n will always be >= k
and I failed to mention the stipulation that every element in all of our sets must be used (and every element is unique, you will not see the same value twice).
in terms of the actual problem, I am not looking for what values would go in what set. that actually has an entirely new set of constraints. so, what I am hoping for is "potential solutions" in terms of the amount of sets and the sizes of them. but I think for the answer you are giving this doesn't matter.
anyways let me see if my understanding of the suggested solution is correct:
we have M elements in K sets, trying to form N sets of size Y
(at each step, we can stop and use this solution and/or backtrack to find another solution)
1.) form N sets of size 1, first using the sets of size 1, and then using at least one value from each of the other sets
2.) kind of like... fill in these sets with the remaining values? potentially a huge amount of ways they could be put in...
is it the case that every time we do #2 it will give us a potential solution?
|
On May 17 2018 05:40 Ciaus_Dronu wrote:Also while I'm here, here's a fun thought-experiment that uses a bit of basic graph theory and probability intuition: Say you are running some NGO which delivers vaccines to countries suffering epidemics. You only have enough vaccines to vaccinate some small portion of the population (say less than 5%), and you can't gather any data on the population / distribution / individuals in the country, all you can do is randomly give people the sealed vaccine, and some instructions. You may assume the people you give the vaccine to will follow your instructions. What instructions do you give to try most effectively stem the epidemic? Rough solution: + Show Spoiler + With varying extra details and depth, the very basic idea is as follows: You instruct the person you give the vaccine to not to take it themselves, but rather to give it to a friend, who should then take it. Those with more friends are hence more likely to get vaccinated. The assumption being that a person's capacity to spread the infection is proportional to how much they interact with others in their surroundings. In essence, the strategy targets those with greatest ability to spread infection.
Explicitly modeled, your population is a graph. People = vertices, friendships = edges. More incident edges, more likelihood of receiving vaccine from a friend as well as higher expected amount of infection spreading.
instructions: kill anyone who is infected
ok no but actually I would have never gotten the recommended solution, which is actually pretty damn clever
|
On May 17 2018 06:55 travis wrote:Show nested quote +On May 17 2018 04:57 Kleinmuuhg wrote: If I understand the problem correctly then there are obviously only solutions for n greater or equal than k. Cant you just sort the k sets in ascending order and pick single elements to form new size 1 sets until you have as many sets as you want? Now calculating the sizes of the new sets shouldnt be a problem. (You have all untouched old sets, some new size 1 sets and maybe 1 half-used old set). Show nested quote +On May 17 2018 05:28 Ciaus_Dronu wrote: For the new problem, is there any requirement other than simply the number of new sets you must have? If not, then @Kleinmuuhg's answer looks perfect. If you want to keep sets small-ish maybe, you can just subdivide up the largest sets first, until you have n in total. yes, n will always be >= k and I failed to mention the stipulation that every element in all of our sets must be used (and every element is unique, you will not see the same value twice). in terms of the actual problem, I am not looking for what values would go in what set. that actually has an entirely new set of constraints. so, what I am hoping for is "potential solutions" in terms of the amount of sets and the sizes of them. but I think for the answer you are giving this doesn't matter. anyways let me see if my understanding of the suggested solution is correct: we have M elements in K sets, trying to form N sets of size Y #at each step, we can stop and use this solution and/or backtrack to find another solution 1.) form N sets of size 1, first using the sets of size 1, and then using at least one value from each of the other sets 2.) kind of like... fill in these sets with the remaining values? potentially a huge amount of ways they could be put in... is it the case that every time we do #2 it will give us a potential solution?
Hmm, there are actually a huge number of ways you may be able to end up doing this. If n >> k, and some of your original k sets are themselves large, you end up with something worse than set-partitions in terms of how many ways you can break it up.
But let's be explicit. Let m be the total number of (unique) numbers in our k sets. Let n be the target number of sets.
Constraints: There exists at least one way of getting our m numbers into n new sets if and only if:
k <= n <= m
Proof if you want: + Show Spoiler + Only if: The first inequality because you can never merge old sets, only break them up or keep them the same. The second inequality because every set needs at least 1 number in it, so you can't have fewer total numbers than sets.
if: If both inequalities hold, and n = k, then the sets as they are given are fine. If k < n, choose any set with more than 1 element (which must exist as k < m) and break it into 2 sets. We've now increased "k" by 1. Keep doing this until "k" = n. (I say "k" as we've essentially done induction to get a new value for the number of sets we already have).
Horror of counting all possible solutions: The issue is that there are a huge number of ways to break up your k old sets into n new ones (every new set will just be some subset of an old one). Finding even a rough count is nightmarish at best. Let's give an example to illustrate. Say k = 3, n = 5, m = 15. Let our 3 starting sets be A, B, C with |A| = |B| = |C| = 5.
To get to 5 sets from these 3, we could partition C into 3 subsets, and now we'd have 5 new sets, done. Call the number of ways of doing this {5, 3}. I think it's called the Stirling number of the second kind, and it's generally large. We could instead partition just A or just B into 3. But we could rather partition A and B into 2 parts each and leave C as is; that'd also give us 5 new sets. Same for all pairings. As you can see this gets... well, I'd never tackle this as a research problem unless I wanted to be only speaking in generating functions by the end of it.
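For reference, that {5, 3} really is the Stirling number of the second kind, S(5, 3) = 25, and it's quick to compute from the standard recurrence S(n, k) = k·S(n-1, k) + S(n-1, k-1) (a small sketch, not from the thread):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    # Number of ways to partition an n-element set into k nonempty blocks.
    if n == k:
        return 1          # includes S(0, 0) = 1
    if k == 0 or k > n:
        return 0
    # The new element either joins one of k existing blocks, or starts its own.
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

print(stirling2(5, 3))  # 25
```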
Easy way to get any solution: All hope is not lost though. If ALL you care about is finding some way of turning your k sets into n sets, with those same elements, by only breaking sets up and never merging them, then this is easy. First, check that k <= n <= m holds (otherwise no solution exists). Second, check if k = n; if so, we are done. If not, take the largest set you have, cut off 1 single element and make a new set with just that element. You now have k+1 sets. Find the new largest set (it might not be the same as in the previous step) and again take off 1 element and make that element its own set. Keep doing this until you have n sets.
In our previous example with A, B, C, the algorithm would take 1 element from, say, A, at step 1. Then another element from say B (NOT A, as it is smaller than B after we chopped an element off of it) at step 2. Then you'd be left with C, two sets of size 4, and two sets of size 1.
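The chop-one-element-off-the-largest-set procedure described above is short to write down (a Python sketch; the function name is made up):

```python
def split_into_n(sets, n):
    # Greedy version of the procedure above: while we have fewer than n parts,
    # peel one element off the current largest part into its own singleton set.
    # Merging is never allowed, so a solution exists iff k <= n <= m.
    parts = [set(s) for s in sets]
    m = sum(len(p) for p in parts)
    if not (len(parts) <= n <= m):
        raise ValueError("need k <= n <= m")
    while len(parts) < n:
        # The largest part has >= 2 elements here: if all parts were
        # singletons we'd already have m >= n parts.
        largest = max(parts, key=len)
        elem = largest.pop()
        parts.append({elem})
    return parts
```

Using the sizes from the example earlier in the thread (three size-1, two size-2, two size-4, one size-9, so m = 24) with n = 18, this returns 18 disjoint parts, each a subset of one original set.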
|
That last part of Ciaus_Dronu's answer is basically what I suggested. I think he explained it pretty well. I can type up some pseudocode if you want.
|
Agreed. I don't actually think the problem is that hard at all. You just need a principled way to run through your sets. Given that you have some kind of condition for what elements can be inside a set, which you haven't specified, you can simply iterate through this generation process, and stop when you find an acceptable solution. Greedy algorithms work!
Of course, if you can somehow use your acceptability criterion to restrict the search space, then you should. But if you can't and can only decide whether a solution is acceptable or not, then just iterate greedily until you find an acceptable partition.
Oh, of course, if you have a very tiny number of actually acceptable solutions, or you are looking not just for any solution, but for the best solution, then this is not going to work as well. But for just finding *a* solution, greedy is fantastic.
|
@travis:
If your additional constraints involve a specific ordering of the elements within the sets and how they can be divided according to that, I thought of an easy way to both count and generate possible partitions. Basically, if you can "linearise" the problem a bit and get rid of partitions that don't look like intervals when you give the elements some order, it's not too hard to count all possible partitions, and to see how you get them.
If this is of value to you, I can type it up. But it's a very specific constraint so maybe not =/
The gist is to model the covering relation of the order inside each original set as edges between values (i.e., an edge between u and v if v is the smallest value bigger than u). Then you just delete any n-k edges and get a graph with n components, corresponding exactly to an allowed partition into n sets.
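If that covering-edge constraint applies, counting is just choosing which n-k of the m-k edges to delete, i.e. C(m-k, n-k), and generating the partitions is the same enumeration (a sketch; the function names and example sets are made up):

```python
from itertools import combinations
from math import comb

def count_interval_partitions(sizes, n):
    # Each ordered set of size s contributes s-1 covering edges; deleting any
    # n-k of the m-k edges in total yields a partition into exactly n pieces.
    k, m = len(sizes), sum(sizes)
    return comb(m - k, n - k) if k <= n <= m else 0

def interval_partitions(ordered_sets, n):
    # Enumerate the partitions themselves: edge (i, j) joins positions j and
    # j+1 of set i; every (n-k)-subset of edges to cut gives one partition.
    k = len(ordered_sets)
    edges = [(i, j) for i, s in enumerate(ordered_sets) for j in range(len(s) - 1)]
    for cut in combinations(edges, n - k):
        cuts, parts = set(cut), []
        for i, s in enumerate(ordered_sets):
            piece = [s[0]]
            for j in range(1, len(s)):
                if (i, j - 1) in cuts:   # cut between positions j-1 and j
                    parts.append(piece)
                    piece = []
                piece.append(s[j])
            parts.append(piece)
        yield parts
```

For example, two ordered sets of sizes 3 and 2 split into n = 3 parts in C(3, 1) = 3 ways.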
|
I'm having an issue (dispute) with someone and need some clarity, as my brain is fuzzing whenever I try to properly prove it one way or the other.
The core issue is this: trying to calculate the expected cost of something, in the context of a game.
Basically: there's a yearly chance that the event might happen if you pay a fee, say X% a year, and each year you have to pay K. I.e., each year you pay K, and you get an X% chance of the event.
Now there's a modifier N, which can sometimes affect this, and I'm trying to figure out how the expected cost varies as a function of N, roughly. It doesn't have to be exactly right, as the dispute is simply about the scaling rate.
N typically has values near 1, with the baseline case being N = 1; it always has a positive value, I think. All values are between 0.8 and 1.5.
N has the following effects: it causes the yearly cost to be K*N, and it modifies the yearly chance of the event happening; I think to X*(1/N), though it might be to X*(2-N), which unfortunately is a whole different calculation, though they're fairly similar over the range of values actually used.
|
You are not being 100% clear. Are the following true?
If you pay, the event might happen; otherwise it doesn't. The event is positive, and you wish to know how much the average cost of the event is, without caring how long it takes for the event to happen? So an example would be a lottery, where you pay a monthly fee for a chance of winning a cruise.
For the base case, the expected cost of the event is K/p, where K is your cost and p is the probability of the event happening (The percentage X you declared, divided by 100). The reason for that is that you expect on average p events happening per year that you pay (It doesn't matter that p is smaller than 1). So you pay K for p events, thus the cost of one event is K divided by p.
The same logic still works in your modified case. Through a change of N, the cost changes to K*N, and the number of events per year changes to p/N. Thus, you pay K*N for p/N events, and so each event costs you K*N divided by p/N. Fractional maths means that this is equal to K*N*N/p. So in this case, the total cost is K*N² / p.
In the second case, a similar argumentation leads to K*N / (p*(2-N))
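Both formulas are easy to package up and sanity-check (a Python sketch, not from the thread; the function and variant names are made up):

```python
def expected_cost(K, X, N, variant="inverse"):
    # Expected total fees paid per occurrence of the event.
    # Yearly fee is K*N; the modified yearly probability depends on the variant:
    #   "inverse": p' = p/N      -> cost K*N / (p/N)      = K*N^2 / p
    #   "linear":  p' = p*(2-N)  -> cost K*N / (p*(2-N)), needs N < 2
    p = X / 100
    if variant == "inverse":
        p_mod = p / N
    elif variant == "linear":
        if not N < 2:
            raise ValueError("need N < 2 so the probability stays positive")
        p_mod = p * (2 - N)
    else:
        raise ValueError(f"unknown variant: {variant}")
    return (K * N) / p_mod
```

With K = 100 and X = 10%: the base case N = 1 gives 1000 either way; N = 1.2 gives 1440 in the inverse variant and 1500 in the linear one, so the two modifiers do diverge even over the stated 0.8 to 1.5 range.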
|
On May 25 2018 09:06 Simberto wrote: You are not being 100% clear. Are the following true?
If you pay, the event might happen; otherwise it doesn't. The event is positive, and you wish to know how much the average cost of the event is, without caring how long it takes for the event to happen? So an example would be a lottery, where you pay a monthly fee for a chance of winning a cruise.
For the base case, the expected cost of the event is K/p, where K is your cost and p is the probability of the event happening (The percentage X you declared, divided by 100). The reason for that is that you expect on average p events happening per year that you pay (It doesn't matter that p is smaller than 1). So you pay K for p events, thus the cost of one event is K divided by p.
The same logic still works in your modified case. Through a change of N, the cost changes to K*N, and the number of events per year changes to p/N. Thus, you pay K*N for p/N events, and so each event costs you K*N divided by p/N. Fractional maths means that this is equal to K*N*N/p. So in this case, the total cost is K*N² / p.
In the second case, a similar argumentation leads to K*N / (p*(2-N)) ok i'll try to clarify, though it looks like you got it all anyways.
all of those listed things are true, yes.
ok, that looks fine.
|
Obviously you have to specify that N < 2 for the second case, because the probability cannot be negative (and at N = 2 the event would never happen, making the cost infinite).
Moreover, that is right for the base case, where you know whether N is applied or not. In order to compute the general case, where you cannot say a priori whether N is active, you need to know the probability that the modifier N is applied. The general case will also allow you to compute the probability of the event occurring in at most Y years, and things like that.
|