Thread Rules

1. This is not a "do my homework for me" thread. If you have specific questions, ask, but don't post an assignment or homework problem and expect an exact solution.
2. No recruiting for your cockamamie projects (you won't replace facebook with 3 dudes you found on the internet and $20).
3. If you can't articulate why a language is bad, don't start slinging shit about it. Just remember that nothing is worse than making CSS IE6 compatible.
4. Use [code] tags to format code blocks.

On March 18 2017 08:26 WarSame wrote: Does anyone know where to find longer video footage of autonomous cars in service? I can only find promotional videos so far. I'm looking for an hour+ of straight driving to see what it is like. On a related note, where is the state of the tech right now? Last I checked, the consensus was that driving itself had been mostly figured out, but it still had problems with poor weather conditions, such as the Lidar not working. Is this still the case?

Anyone who knows where the cutting edge is can't actually tell you (due to NDAs). You just kinda have to guess based on what Waymo said last and extrapolate.

We don't even have access to, say, year old footage? I just wanted to get an idea of it :/

p is prime and >= 5, so it is odd, which makes both p-1 and p+1 even (writing A = (p-1)(p+1) = p^2 - 1). That gives a factor of 2x2, i.e. 4 | A. Once we divide p-1 and p+1 by 2, we have two consecutive numbers; one of them must be even, which gives another factor of 2: 2x4 = 8 | A.

(p-1), p, (p+1) are 3 consecutive numbers, so one of them is a multiple of 3. It can't be p, since p is prime and p >= 5. Therefore 3 | A.

Since 3 and 8 are coprime, 8 | A and 3 | A together imply 8x3 = 24 | A.
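For what it's worth, the argument is easy to sanity-check numerically (a throwaway script, not part of the original proof; `is_prime` is a naive helper of my own):

```python
def is_prime(n):
    """Naive trial-division primality test; fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# For every prime p >= 5, A = (p-1)(p+1) = p^2 - 1 should be divisible by 24.
assert all((p * p - 1) % 24 == 0 for p in range(5, 1000) if is_prime(p))
```

Note that p = 2 and p = 3 are genuine exceptions (p^2 - 1 is 3 and 8 there), which is why the hypothesis p >= 5 matters.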

Can someone tell me how to get GitHub to let me attach a .zip to a release? When I try to attach my ~1MB .zip I get the following:

- An attachment with that filename already exists. Try a different file.
- We don't support that file type. Try zipping it. Try another file.
- Yowza, that's a big file. Try again with a file smaller than 2GB.
- This file is empty. Try again with a file that's not empty.
- Something went really wrong, and we can't process that file. Try another file.

That's 5 error messages at once, including one telling me to try zipping my .zip. My file is also apparently both too large and too small at the same time. A different filename doesn't change anything, and a random tiny .txt produces the same errors. And for some reason I can't find a solution via Google either.

Since you're getting all possible errors when doing anything the only logical course of action is obviously not uploading anything! The .zip file should magically appear at some point!

Ok it worked on Internet Explorer. It didn't work on the latest version of Firefox, without any blocked scripts or anything for Github... I don't get it.

On March 20 2017 05:01 Blisse wrote: What are you using to make the zip file? I've never had that issue before using 7-zip or Linux's commandline.

Just right click -> send to -> compress in windows explorer. Usually good enough for tiny stuff like that one. The .zip wasn't the problem, anyways... probably. Especially considering a plain .txt wouldn't work either.

should length of string matter when searching for a particular string in a list?

so basically I was asked for an example of an algorithm or method to find a string in a sorted list of strings and to describe time complexity in regards to size of list and length of the string we are searching for.

the example I gave was a binary search tree where the root node would be the index n/2 in the sorted array and then proceed down left or right depending on alphabetical value (ex. left if we're looking for 'hello' and root node is 'igloo').

time complexity O(m log n) where n is size of list and m is length of the string we are searching for. log n for going down the tree, m for number of characters we could possibly compare to judge which is 'greater' alphabetically at each node.

is my line of thinking correct?

also I'm pretty sure this isn't the efficient way to match a string in a sorted list? but I was just asked for an example of a way so yea lol

Usually, big-O questions in tests simplify their keys before putting them into the data structure of choice, e.g. collapsing strings to integers before inserting them. If you got such a question, your teachers must be pretty sadistic. Focusing on how to apply general algorithms is what's more important, and in the practical (real) world, big-O notation will matter less and less as you optimize more for the CPU cache.

But to answer your question, it depends on the algorithm. If you precalculate sizes, it's O(log n). If your string searching uses something like strcmp, then you could argue for something else. Edit: I'd still argue for O(log n) in the strcmp case.

hm.. I didn't really get many details besides that we were to give time complexity in terms of the length of string s (the string we search for) and n (the size of the list). I understood that as wanting us to determine whether string length matters for the time complexity. I decided yes, because we weren't given any details on how the strings would be compared, and since they asked about length, I assume the answer is expected to include character comparison.

If you're looking for an example where the length of the string is the main factor of complexity, you should look into tries, which are great to store strings. (https://en.wikipedia.org/wiki/Trie)

Insert, delete and search are all O(m), where m is the length of the string.
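As a rough illustration of the trie idea (a minimal dict-of-dicts sketch of my own, not code from the thread; the class and method names are made up):

```python
class Trie:
    """Minimal trie: each node is a dict mapping characters to child nodes.
    Insert and search both walk one node per character, so they are O(m)
    in the length of the word, independent of how many words are stored."""

    def __init__(self):
        self.root = {}

    def insert(self, word):
        node = self.root
        for ch in word:                    # one step per character -> O(m)
            node = node.setdefault(ch, {})
        node["$"] = True                   # end-of-word marker

    def search(self, word):
        node = self.root
        for ch in word:
            if ch not in node:
                return False
            node = node[ch]
        return "$" in node                 # reject bare prefixes like "hell"
```

The end-of-word marker is what distinguishes a stored word from a mere prefix: after inserting only "hello", searching for "hell" follows existing nodes but still returns False.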

tries do seem like a better example. next time I'd probably go with that one. I assume list length does not matter in complexity when using tries since we're just going down the tree until all characters are matched.

is my line of thinking with using a BST and its complexity in terms of string length and list length correct though?

The length of the string you are searching for is irrelevant for big-O notation. Basically, the vast majority of string compares will result in discovering the strings are different after looking at the first character. As you get to the bottom of the tree it is more likely you will get longer matches, but even if your search string is super long it does not matter, because there is no need to read past the length of the string you are comparing against. So if you are comparing foohksahfjhsfhlkshklsjhalkjslkjaksjdfalksjldfhf to foo, everything after the first h is irrelevant. If the string you are searching for were a gigabyte, it would be irrelevant to the run time, because every string compare ends by the time you read to the end of the string you are comparing it to.

wouldn't that still be compatible with the idea that you have to search through length of string s? because we are looking for worst case, the worst case would be length of string s. foohsdfsdf compared to foo would stop after the first h (far from worst case), but if it were reverse with foo being the search string, it would also stop after we finished comparing f,o,o. the number of comparisons equaling length of string s (worst case for that length of string).

so i'm not quite understanding why string length wouldn't be relevant there

On March 22 2017 06:59 dsyxelic wrote: wouldn't that still be compatible with the idea that you have to search through length of string s? because we are looking for worst case, the worst case would be length of string s. foohsdfsdf compared to foo would stop after the first h (far from worst case), but if it were reverse with foo being the search string, it would also stop after we finished comparing f,o,o. the number of comparisons equaling length of string s (worst case for that length of string).

so i'm not quite understanding why string length wouldn't be relevant there

My answer about it being irrelevant is in the case where the string is not in the tree. The question was phrased in a way such that N is the size of the string you are searching for, but the list of strings is held constant. As N increases it eventually outpaces all the strings in your tree and is by definition not in your tree, so the length is irrelevant. If the size of the strings in the tree were simultaneously increased, and they all started with the same characters as yours, then it would be O(N), but that is not how the question was phrased. Obviously, if you do find your string, then in order to prove they are in fact the same you have to read both strings in their entirety, and thus we are back to O(N).

TLDR: Worst case is O(N) until your string is longer than any in the tree, at which point it is O(1).

well, it's asking for time complexity based on the size of the list, so I don't think we can expect the size of the list to be constant. The question, paraphrased, is: 'describe an algorithm that searches for a string s in a sorted list of size n. What is the time complexity of the algorithm in terms of the length of string s and the size of list n?'

since I chose a BST, I said O(m log n) with m = length of s .

edit: also even if the string we're searching for outpaced the size of the strings in the list, wouldn't that still be O(string length) for the search operation anyways because it's not worst case?

edit2:

to expand on the trie example, it would be O(m), where m is the length of string s, for matching. However, if we care about inserting the list of strings into the trie, that would be O(mn), where again m is the string length and n is the size of the list: O(m) per insertion into the trie, and n insertions in total.

I think this is the correct line of thinking? anyone please correct me if I'm wrong

There's no reason to construct a BST or a trie, since it's clearly just a one-time search and not a data structure question. If you construct a BST or trie, you do indeed spend an extra O(N*M) on inserting every string in the list, like you said (when the strings in the list are each of length M). If the "list" is an array, you can do O(|S| * log N) without any additional data structures.

(In case the "list" is a linked list, you can simply compare against every string in O(N*|S|).)

oh, then I misinterpreted the question :/ They didn't specify what kind of list it was, so I guess we can assume it's either an array implementation or a list by pointers.

I went with array cause that seemed simpler.

ok, then thinking of it now with some code:

def match(lst, s):
    start = -1
    end = len(lst)
    while end - start > 1:
        p = (start + end) // 2  # integer midpoint
        if lst[p] == s:
            return True
        if lst[p] > s:
            end = p
        else:
            start = p
    return False

though of course the comparisons would only work if I had some other function to compare the characters of the strings for alphabetical ordering.

this function would run in O(log n) comparisons, so O(m log n) total, with m = length of string s for character comparison

On March 22 2017 19:03 dsyxelic wrote: oh, then I misinterpreted the question :/ They didn't specify what kind of list it was, so I guess we can assume it's either an array implementation or a list by pointers.

I went with array cause that seemed simpler.

ok, then thinking of it now with some code:

def match(lst, s):
    start = -1
    end = len(lst)
    while end - start > 1:
        p = (start + end) // 2  # integer midpoint
        if lst[p] == s:
            return True
        if lst[p] > s:
            end = p
        else:
            start = p
    return False

though of course the comparisons would only work if I had some other function to compare the characters of the strings for alphabetical ordering.

this function would run in O(log n) comparisons, so O(m log n) total, with m = length of string s for character comparison

or O(|S| log n) with the way you expressed it

Note that if the strings are random, then the average case remains O(log(n)), because string compares take constant time on average for random strings.

If the strings are worst case, but your location in the array is random then you can improve to: O(m + log(n)^2).

You can keep track of the rightmost string you compared against to the left of you, and the leftmost string you compared against to the right of you, and how many characters each has in common with your string up front. All strings in between them also have those characters in common, so you can skip those characters when doing comparisons.

While this improves average case assuming a random location, it does not improve the worst case since you might never traverse both left and right so you cannot take advantage of that trick. Basically if your string is first or last in the array that would be a worst case scenario.
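The prefix-tracking trick described above can be sketched roughly like this (my own sketch, with made-up names, not code from the thread; it remembers the longest common prefix of the target with each boundary string and starts each comparison at the smaller of the two):

```python
def search_prefix_skip(arr, s):
    """Binary search in a sorted list of strings, skipping leading characters
    that the left and right boundary strings are already known to share with s."""
    lo, hi = -1, len(arr)          # sentinel indices outside the array
    lcp_lo = lcp_hi = 0            # chars s shares with arr[lo] / arr[hi]
    while hi - lo > 1:
        mid = (lo + hi) // 2
        t = arr[mid]
        # Every string between arr[lo] and arr[hi] shares at least
        # min(lcp_lo, lcp_hi) leading characters with s, so start comparing there.
        k = min(lcp_lo, lcp_hi)
        while k < len(s) and k < len(t) and s[k] == t[k]:
            k += 1
        if k == len(s) and k == len(t):
            return True                # exact match
        if k == len(t) or (k < len(s) and s[k] > t[k]):
            lo, lcp_lo = mid, k        # s sorts after arr[mid]
        else:
            hi, lcp_hi = mid, k        # s sorts before arr[mid]
    return False
```

As noted above, this only helps once both boundaries have moved off their sentinels (where the known common prefix is 0), which is why it improves the average case but not the worst case.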

There are some other tricks you can do to improve that case such as walking up from the bottom left corner of the tree, but even in that case if you walk half way up a tree of depth 30 you have walked a distance of O(log(n)) and you only removed ~32k out of one billion possibilities so that has a bad worst case.

Has anyone thought of a way to do it faster than O(m*log(n)) in the worst case?