Thread Rules

1. This is not a "do my homework for me" thread. If you have specific questions, ask, but don't post an assignment or homework problem and expect an exact solution.
2. No recruiting for your cockamamie projects (you won't replace Facebook with 3 dudes you found on the internet and $20).
3. If you can't articulate why a language is bad, don't start slinging shit about it. Just remember that nothing is worse than making CSS IE6-compatible.
4. Use [code] tags to format code blocks.
On January 19 2018 02:58 travis wrote: Well I am doing it because I really enjoy the puzzle and I feel like I am learning a lot.
As for functional programming, aren't loops faster?
It depends on what you want to do with your code. With functional programming it's easier to parallelize things. You'll be consuming more peak memory but will be able to traverse graphs asynchronously.
For TSP, for example, you could use some form of scatter-gather pattern, something like this:
A process starts at node 0. It moves to the next node, storing the visited nodes in order. When it finds a branching node it chooses one of the branches at random to proceed down and spawns new processes for all the other branches, passing them the visited-node list up to that point. Each of those processes then follows the same pattern. When one process reaches the destination you pause it, and all other processes compare themselves against it. If a process has more nodes in its list than the process that reached the destination, you kill it (we're no longer interested in that path). When another process reaches the destination with fewer nodes than the paused process, you pause that one and kill the previously paused process. The solution to TSP will be the last process standing.
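As a single-machine illustration, the spawned processes can be modelled as entries in a work queue, each carrying its own visited-node list (the graph shape and node names below are invented for the example):

```python
from collections import deque

def fewest_hops_path(graph, start, goal):
    """Sketch of the branching search described above: each queue entry
    plays the role of one spawned process carrying its visited-node
    list; a finished path prunes any path that is already as long."""
    best = None
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        # "Kill" this process if it can no longer beat the current best.
        if best is not None and len(path) >= len(best):
            continue
        node = path[-1]
        if node == goal:
            best = path  # pause/replace the previous winner
            continue
        for nxt in graph[node]:  # "spawn" one process per branch
            if nxt not in path:
                queue.append(path + [nxt])
    return best
```

Real Erlang/Elixir processes would coordinate the pruning via messages instead of a shared `best` variable, but the control flow is the same.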
For most of the processes this won't even be an O(n) problem. And it won't ever be greater than O(n) for any of them.
It's way more efficient than looping because you're checking multiple paths simultaneously. Languages like Erlang and Elixir have really light processes that can communicate with each other even across different machines in the cluster.
This, in my opinion, would be way better than any imperative/synchronous solution you can come up with. But I'm pretty noob at algos and I might be completely wrong. I'm also sure that this could be optimized somehow.
To be fair you can do all the same parallelism using loops and similar constructs. Avoiding race conditions just comes more naturally with many functional languages because of enforced immutability.
It is also harder to do parallel loops that can share data between each other.
The closest comparison would be with how they're tracking bees to solve TSP. A typical worker bee has a set of flowers it visits numerous times during a day. The bee will fly out and randomly go from flower to flower. After 20 or so runs it'll settle on the best route of those 20 (not necessarily the optimal route overall, just good enough for the bee, since it doesn't want to waste any more time checking other permutations). That's basically a heuristic. Now imagine that instead of one bee doing all those runs, you have thousands or millions of bees working the same set of flowers and comparing notes on their routes all the time.
Such a thing would be easiest to implement in a language like Erlang, which comes with lightweight processes and inter-process communication built in (OTP). Doing this in Java using loops would be a nightmare. Even if you used functional concepts in Java, you'd still need a lot of work.
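Stripped of the process machinery, the bee heuristic above is just random restarts. A minimal Python sketch (the `runs` and `seed` parameters are made up for the example; the thousands-of-bees version would just run many of these in parallel and compare results):

```python
import random

def bee_tour(dist, runs=20, seed=0):
    """Try `runs` random visiting orders and keep the best tour seen,
    like the bee settling on the best of its ~20 daily runs."""
    rng = random.Random(seed)
    n = len(dist)

    def length(order):
        # Total length of the closed tour, returning to the start.
        return sum(dist[order[i]][order[(i + 1) % n]] for i in range(n))

    best = list(range(n))
    best_len = length(best)
    for _ in range(runs):
        order = list(range(n))
        rng.shuffle(order)  # one random "flight" over all the flowers
        if length(order) < best_len:
            best, best_len = order, length(order)
    return best, best_len
```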
That's O(n!)... (in the number of processes) No amount of parallelism will save you if you use brute force for TSP.
Sure. But right now there's no other way to do it. I still think that parallelism is the way to go right now. It does require more computing power but it's able to produce results much faster (and is easy to scale both horizontally and vertically, which is an added benefit).
On January 21 2018 06:41 sc-darkness wrote: Has anyone got Entity Framework and Microsoft SQL Server experience? I've got a task after an interview to create an ASP.NET MVC project and another one which will have to provide endpoints (this is the project which will use Entity Framework). One of the requirements is that both projects should work when they're each on a different host. I'm not entirely sure what they mean. I can provide a config file to configure the host, but it's the weekend and I can't ask for clarification. Has anyone got an idea what they mean? The requirement seems a bit vague to me.
I used to do .NET a lot.
Here is my interpretation.
The first project is an MVC application. (This is the front end application for the web using the MVC framework)
The second project is a REST service. (This probably also uses a controller to configure the REST endpoints.) The second project should use Entity Framework to access data from the database.
You can launch (execute) both projects independently. You can think of this as having two "executables" that you can launch from different servers. Since this is a .NET project, it will most likely run in IIS. However, if it is an OWIN project, then you can launch it in either IIS or as an executable.
If it runs in IIS, do you mean it could be IIS Express and then I can probably hardcode it to a specific port number? E.g. localhost:1234?
Also, I'm still confused about database access. I'm still new to Microsoft SQL Server, and it seems you can use it in many different ways - named pipe, TCP/IP, etc. I don't see how this will work on another machine without specifying credentials or something. Maybe I need to read a bit more about this.
Yes, you can also run it in IIS Express, which is the web container for Visual Studio. You can launch multiple programs in Visual Studio; you would just bind each web app to a different port as you stated.
Microsoft SQL Server is just a relational database. In Entity Framework, you just need to add the connection string to your config file. You can also specify the connection string when you create an instance of the context. Ultimately, it depends on how you will access the database: will it be username/password, or will you use Windows auth?
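For reference, the connection string normally lives in the `connectionStrings` section of `Web.config`/`App.config`; the server, database, and credential values below are placeholders:

```xml
<connectionStrings>
  <!-- SQL auth: explicit username/password -->
  <add name="MyDbContext"
       connectionString="Server=db-host,1433;Database=MyDb;User Id=appuser;Password=secret;"
       providerName="System.Data.SqlClient" />
  <!-- For Windows auth, use Integrated Security=True instead of User Id/Password. -->
</connectionStrings>
```

By convention, Entity Framework picks the entry whose `name` matches your context class.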
On January 22 2018 09:32 Hanh wrote: That's O(n!)... (in the number of processes) No amount of parallelism will save you if you use brute force for TSP.
Sure. But right now there's no other way to do it. I still think that parallelism is the way to go right now. It does require more computing power but it's able to produce results much faster (and is easy to scale both horizontally and vertically, which is an added benefit).
There is a way to do it in O(2^n) and that is better than O(n!). Ideally, you would parallelize the better algorithm but otherwise, having a good algorithm is much more valuable.
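The O(2^n) algorithm being referred to is presumably Held-Karp dynamic programming, which solves TSP exactly in O(2^n * n^2) time. A minimal Python sketch:

```python
from itertools import combinations

def held_karp(dist):
    """Held-Karp: cheapest tour visiting all n cities, starting and
    ending at city 0.  O(2^n * n^2) time, O(2^n * n) space."""
    n = len(dist)
    # dp[(mask, j)] = cost of the cheapest path that starts at city 0,
    # visits exactly the cities in `mask`, and ends at city j.
    dp = {}
    for j in range(1, n):
        dp[(1 << j, j)] = dist[0][j]
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            mask = 0
            for c in subset:
                mask |= 1 << c
            for j in subset:
                prev = mask ^ (1 << j)  # same subset without city j
                dp[(mask, j)] = min(dp[(prev, k)] + dist[k][j]
                                    for k in subset if k != j)
    full = (1 << n) - 2  # every city except 0
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))
```

Still exponential, but 2^20 is a few million states, whereas 20! is beyond any cluster.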
On January 18 2018 00:34 ShoCkeyy wrote: Backend is boring for me, frontend is much more fun, two sides to each story. But I'm also a full stack dev for fun, now architect/strategist/optimizer professionally.
It's all cool and dandy until some asshole comes along and wants you to do one of those "pixel perfect" projects...
Also, JavaScript is bullshit.
That's where you either let go of that "client", or tell them how it is and hope they relax. I never take on work that I feel uncomfortable with; I've learned that over the years. I've also had assholes threaten to beat me up because they broke the code, blamed me, and tried to get away with not paying me.
Also, JavaScript is a mess, but it's getting a lot better and easier to work with, and it's been enabling a lot of growth in the tech space. Meaning more money for just writing JavaScript.
Something I've always struggled with in asp.net MVC.
I have a basic page with a filter and a result set. Change the filter, and the result set will refresh. The result set includes elements which the user might want to update. I want to use ajax so that when either the filter is updated or the results are updated, the results are refreshed accordingly.
Given that nested forms are not supported by HTML, the way I've always done this is to have both the filter and the result set sit in a single large Ajax.BeginForm element with multiple submit buttons identified by name/id. The controller then interprets which submit button was clicked and calls the appropriate method.
Is there a simpler way to do this? Ideally, the filter should be its own form that sends only the filter data back to the server when it's updated, and the result set should be its own form that sends only the result data back when it's updated. But I'm not sure how to do that and still keep them in sync, e.g. click submit on the filter and the result set is updated. I'm sure you can do it with some kind of JavaScript event on the filter form that also triggers a submit on the result set form, but that seems clunky to me.
On January 25 2018 01:59 IyMoon wrote: If I am forced to learn Angular for my job (it is not a web dev job, but I think they are going to be doing some things with web api calls)
1) Should I hate my life? 2) Where is the best place to start
First of all, what are you doing with these API calls? Because if it's big data, Python is better suited for that. Angular is a full framework, so I don't know why they would force you to learn Angular just for API calls.
On January 09 2018 03:29 Excludos wrote: I'm more confused to what he's actually trying to do. Why would an app ever require access to a private wallet?
For the app's author to divert your money into his own wallet :-)
Every crypto wallet app has a private wallet unless they delegate to an external client. If done properly, offline signing is safer than trusting some service to manage your keys.
I don't see the point of the guys who are bashing the security of your app. It seems fine considering that it aims to protect a wallet held on a phone, and what they say doesn't seem applicable in this context. I'd look into the fingerprint API that links with the keystore.
How about:
1. generate a long random string as the wallet password
2. generate a keypair in the keystore
3. encrypt (1) with (2)
4. store the result in a db
5. link (2) to fingerprint auth
Thanks for this post. I've been trying to implement wallets for my app rather than ask for their private key, and this really is helping me clear up my understanding of how to do that properly. Especially the idea of having a "proxy" account with my app that they just transfer some ETH to.
I am only guessing what the API calls would be for. My manager just walked by and asked if I knew Angular, and I told him no, but I could learn it if needed. He told me they were thinking of using it for something and would let me know if they decided on it.
Is anyone into Linux and networking? Could you suggest something to read over the weekend about this topic? Of course, I can google something, but I was wondering if there are any recommendations. I've just been advised to read about it with no specifics; I doubt TCP and UDP would be enough because they're cross-platform. Maybe something like multicast and IPC protocols?
Given the last assignment you brought up here, I think you need to start asking the right questions to whoever is assigning these tasks to you.
No one wants to feel stupid by having to ask their coworkers the simple questions, but I would much rather have someone ask me for details up front instead of spending a weekend shooting in the dark and hoping they hit the right mark. And, in general, it's a completely applicable skill in almost everything you're going to do. Plan properly first, and then work, don't work without planning.
Most (if not all) networking protocols are cross-platform, including multicast and IPC.
I'll have to echo WolfintheSheep here. Linux isn't really any one thing you can read about. You need to know more specifically what you should look for. "Linux and networking" can literally be a million different topics. Are you going to set up a network? Share folders between Linux computers on a network? Hot/cold backups? Do you need to read up on the different network layers? Every protocol ever?
Just to give you an idea of the difficulty of what you're up against here: a few years ago I studied in Australia for half a year and took a course in System Administration (i.e. Linux and networking). I spent something like 30 hours a week on that subject alone and barely managed to scrape by the exam, and I would still consider myself a complete newbie in the field. This isn't something you can just "read up on over the weekend" without any idea what topic you're supposed to read up about.
edit: I guess I feel a little bit of an ass here, but you're not giving me a whole lot to work with either. How experienced are you with Linux to begin with? Is it mainly the networking you need to learn, or the whole of Linux as well? If the latter: just get it installed and play around with it; that's plenty to do over a weekend if you're completely unfamiliar with it. If the former: read up on the different layers of networking, all the way from the physical ones and zeros to the application layer, and read up on some of the protocols: how they work and what they're mainly used for. It might also be a good idea to read up on what networks consist of and how to set up an IP network (statically and manually; automated DHCP is too easy).
So I have a database that is currently an Excel file of about 150 entries (lines), each entry has about 100 or so properties (columns). I want to parse this and make a plain text file to make life easier in the future. The properties are a mix of floats and strings.
Are there any standard formats for storing such data, or should I just go with comma-separated? Also, for accessing it, is looking into SQL overkill for a database this size (i.e., pretty small)?
What do you plan on doing with this data? SQL might not be overkill if you're saying there are 150 lines but 100 columns. It also depends on what you're doing with the data: SQL if you're just trying to store it somewhere else; if you're trying to manipulate the data, then you can try using Python or JavaScript to parse it from Excel and display what you want.
I'd also go with JSON or CSV if you decide to change the format from Excel. Those are formats that can be easily imported into almost any system nowadays.
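A sketch of the CSV route with Python's standard library; the file contents and column names below are invented for the example (at ~150 rows, loading everything into a list of dicts is plenty, no SQL server required):

```python
import csv
import io

# Stand-in for open("parts.csv") -- contents and columns are made up.
raw = """name,weight,material
bracket,1.5,steel
panel,0.8,aluminium
bolt,0.1,steel
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# csv gives back strings, so convert the float columns explicitly.
for row in rows:
    row["weight"] = float(row["weight"])

steel_parts = [row["name"] for row in rows if row["material"] == "steel"]
total_weight = sum(row["weight"] for row in rows)
```

If the lookups ever get more complicated than simple filters, `sqlite3` (also in the standard library) can import the same rows without running a database server.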
If you have to learn something and just use API calls, Vue has a way easier learning curve than Angular, and a large, growing support community. The difference being that Angular is a full MVC framework, while Vue is only the view layer (lol, hope it's obvious).