odoylerules

Members
  • Posts: 45
  • Joined
  • Last visited
  • Days Won: 2

odoylerules last won the day on May 8 2017

odoylerules had the most liked content!

LabVIEW Information

  • Version
    LabVIEW 2013
  • Since
    2009

Recent Profile Visitors

1,777 profile views

odoylerules's Achievements

Newbie (1/14)

Reputation: 8

  1. As someone who does a lot of JavaScript and Node.js programming, I use it all over the place and often wish LabVIEW could parse it as well as some other languages do.
  2. It's stuff like this that makes me avoid developing a Windows target and an RT target in the same project any more. Locked classes/libraries are such a pain. I'm not sure that will solve your issue, since it would still probably recompile on you. Do you know if the items that get recompiled happen to be marked read-only in the OS? That has caused me issues before with recompiles and strange saves.
  3. Well, partly yes, I do have an issue with this. However, I also have an issue with global dependencies, and honestly there may not be a way around that given how LabVIEW was designed. I don't have as much experience with LabVIEW as a lot of people here, so I'm sure there are plenty of use cases where global dependencies are needed. Still, I think there are two types of LabVIEW "packages". One is IDE extensions, such as the G-code manager you linked; these work well with VIPM and are good for extending the LabVIEW IDE. The other is project-specific dependencies, such as OpenG, which I think should live in a subfolder of the project. That basically means keeping a separate copy of OpenG for every project you work on. Personally, I think this makes sharing code a lot easier than running a VIPC file every time you change projects. It compartmentalizes project-specific dependencies away from the IDE/Program Files location and into a known place, and it prevents one project from touching the dependencies of another. There may be more downsides than I realize, but I think it's a better approach than global dependencies.
  4. I've considered writing one, but I have a different philosophy on dependencies: I prefer that all dependencies not native to LabVIEW live underneath the main project file instead of being shared across projects in user.lib and vi.lib. Palettes would be stored in the User Libraries section of the palette viewer, and all palettes would be rewritten to point back to the project-specific dependencies directory instead of user.lib and vi.lib. Something feels really wrong about having to rewrite Program Files directories to maintain dependencies. I've also considered using NPM, the Node.js package manager, to maintain LabVIEW packages instead of VIPM, in combination with the approach above. That is a little more involved, since I would need to write a script to handle different types of packages, and it requires packages to carry a package.json instead of a "spec" file, so there would be additional work. Anyway, I haven't started anything yet, but I'm also curious what else is out there. (A rough sketch of what such a project manifest might look like appears after this list.)
  5. Well, obviously you would prefer to continue the SCC war, so I guess that's an end to this thread. As I mentioned, the purpose of this request was to address the ability to easily share and collaborate on code, given the current tools and options in LabVIEW. The GitHub sharing culture is a standard across other programming languages and, in my opinion, one that LabVIEW should move towards and mirror. I could go on and on about issues with LabVIEW; honestly, it's a pain to use compared to other languages. While I appreciate your comments, complaining about LabVIEW's source control issues has nothing to do with this thread.

     If this type of source sharing doesn't work for you, that's fine; continue zipping and copying files, nothing is stopping you. In fact, you can upload zip files to Bitbucket/GitHub all day long. However, I'm still not understanding your argument: merging changes is hard, so we should only use zip files and forum posts, and we shouldn't make it easier to track changes and collaborate outside the LAVA community? Obviously the reluctance to change is strong in the community, and that's unfortunate given that this website holds knowledge from the best LabVIEW programmers. If you really want the popularity of LabVIEW to grow, a strong open-source culture is key, and learning to use GitHub or a similar site to foster that culture is critical.
  6. I'm not trying to start an SVN vs. Git war here. I personally started with SVN and switched to Git, and I don't consider myself a master of either. Some of the issues you are raising are inherent to any SCC when using LabVIEW, specifically merging, testing, and code compliance. However, some of the points you are raising about Git don't seem accurate and are probably outdated relative to its current version. The strength of Git is its distributed model and its ability to clone and fork without impacting other repositories; I find that to be its biggest strength, and it makes collaboration a lot easier. The ability to easily branch, fix an item, and then bring it back into your main development branch is essential, and it is just not something you can do easily with SVN. They are very different SCC methods, and I think there is some confusion in this thread about how Git works.

     Setting the SVN vs. Git debate aside, my main thought was about the distribution side, since SourceForge is just not a good host for things like this. I suggested GitHub.com not expecting the maintainers to switch to Git, but specifically because you can import SVN repositories and keep using SVN commands/clients while hosting the code on GitHub.com. In addition, GitHub.com provides binary distribution, issue tracking, and project management from a single unified place. While I read lavag.org all the time, using a forum to track code issues is a very outdated method.

     Now, I'm not exactly familiar with the past OpenG development process and maintenance procedures; it is obviously a mature library, so changes are rare. But one thing I think the LabVIEW community should really start embracing is the open-source model and project distribution structure you find on GitHub.com. The ability to collaborate and share code in a centralized location is what makes GitHub so powerful and popular; there is a reason it has become so big in the last few years: its model works. So basically my thought was that, since OpenG is so popular, it would be a good test case for the LabVIEW community on sharing open-source code. If it isn't a good candidate for this, then no big deal.
  7. I was curious whether there has ever been any consideration of transferring the OpenG repository from SourceForge to either GitHub or Bitbucket. Both services support direct SVN repository import. It seems to me that it would be a much better way for others to contribute code to the project, with pull requests and issue tracking for things like the LinuxRT zip file changes. SourceForge just doesn't offer any of these collaborative tools, which have become standard in the last few years. In addition, both provide direct binary file downloads, with GitHub's release feature being extremely useful. Considering OpenG might be the most-used open-source LabVIEW tool, I personally think it would be nice to follow the best practices other languages' open-source projects use for code sharing and collaboration. PS: SourceForge has had some shady practices lately, bundling crapware with installers (GIMP, for example). Not really an issue for .vip packages, but still shady.
  8. I've been using other languages too much recently; I had totally forgotten about this thread and the general LabVIEW namespacing issues. My dream of project dependencies living solely within the project directory may be a dead one until NI changes some things. Even Git submodules may not be a good solution because of the library linking issues highlighted in that thread.
  9. I've been thinking about this issue a lot lately, and having recently been writing code in Node.js, I've been considering writing a LabVIEW tool that mirrors some of the functionality of NPM's way of handling Node.js dependencies. Unfortunately there are pros and cons to both of the solutions you outline.

     I personally would avoid #1 at all costs. Checking re-use code into your project repo is just asking for pain down the road. If you pursue that route, I would recommend looking into Git submodules; even those are a pain to use, but they are probably a better option than checking code directly into the project.

     As far as #2 goes, this is one of my biggest issues with LabVIEW. It would be so nice if you could have the functionality of the user.lib folder but have it associated with the project folder, on a project-by-project basis. If you go the user.lib route, or the VIPC route, you are forcing the programmer to work on essentially one project at a time, and it has always felt heavy to me to have to modify the LabVIEW program directory to update re-use libraries. However, if your team relies heavily on palettes, this is the only way I know of to get palettes for your libraries working in LabVIEW, and it is pretty much the current standard way of doing things, so it is probably the better route for the time being. A nice future LabVIEW feature would be loading palettes out of the project instead of user.lib.

     Anyway, the approach I'm considering is similar to NPM; why mess with something that works great? Each project would have an associated file, similar to package.json, that contains links to the Git repositories holding your dependencies for the project. These dependencies would then be installed into the project folder under a single "dependency" subfolder that you ignore in your .gitignore, with each re-use library getting its own subfolder within it. The benefit of this system is that you only check the "dependency" file into your repo; the user then "installs" the dependencies after checking out the main project repo by running the tool, which clones the individual library repositories into that dependency subfolder (a sketch of this install step appears after this list). The file location relative to the project file is always constant, so you shouldn't have to worry about linking issues. In addition, it lets you keep all of your re-use libraries in separate repositories: if someone updates a re-use library later, all you have to do is pull down the latest version of that library's repository, and you can have different versions of re-use libraries associated with different projects at the same time.

     The main downside I see is that you would lose palette functionality for these libraries. I am a big Quick Drop user, so this doesn't affect me, but it could be an issue for others. Also, this solution probably only works for Git-based source control; I've used SVN before, but I'm not as familiar with it. There may be other downsides I'm missing that I would like to hear about. I may start a new thread, and if I get something working I'll probably look for feedback from the community.
  10. Welcome to LVOOP... this is one of my biggest gripes about using it for certain things. The workarounds listed above help, but LVOOP design patterns can make some of the simplest things so heavy. Like these guys mentioned before, you can make a new class that contains the shift-register data of the NI method you want to use, write a custom method for the action you want, then include this class in your parent and pass it around in each child. That sounds like an interesting approach, but lately (LV2013 SP1) I keep getting burned by hacking around call-by-reference nodes to try to fix issues like this. There seem to be some deep LabVIEW crashing bugs that I keep stumbling on when I do things like that.
  11. A TCP/IP server is going to be the most flexible; it just takes more to set up. If you do go with network streams, have you seen this white paper? http://www.ni.com/white-paper/12267/en/ You have to set up your streams correctly, otherwise you won't be able to connect. Also, I believe you have to shut down the "server" stream and re-open it if you want a new endpoint to connect to it.
  12. He makes some good points, but I would guarantee there are a hundred articles out there that argue against every point he makes better than I could. People just need to work with what's best for them. I use Git and I'm sticking with it; the ease of branching is what made me switch from SVN. Making a branch, testing a change, and merging it back in is fast and easy. I will say I do miss the locking aspect of SVN, especially since LabVIEW likes to randomly touch files in the project.
  13. If this really is a command you feel is safe for anyone to send to the cRIO at any time, have you considered handling this message in a separate loop on the cRIO that maintains some cRIO state information? Then anyone can send the command at any time; if the cRIO is in a state that can process it, it processes it, otherwise it discards the message or queues it up for later. As far as implementing this, I'm not 100% sure what to tell you; in all my applications I specifically avoid having multiple hosts that can send commands. I'm not sure whether a network-buffered shared variable that you can write from multiple hosts works the way you want, basically a networked multi-writer queue. If it does, it could be a decent route for your application. If I were going to do something like this, I would probably set up a TCP/IP server on my cRIO and handle all "commands" through that. It will accept multiple connections from different hosts, and you can then parse messages (commands) from different hosts based on their IP, connection sequence, or a multitude of other things (a sketch of this pattern appears after this list). If you haven't seen it yet, this library may help you with making a TCP/IP server: http://www.ni.com/example/27739/en/ I have used it before on a cRIO to handle commands.
  14. I second Shaun's comment. I have found that using multiple network streams to connect to different clients becomes extremely CPU-intensive on cRIOs, especially if you need fast updates. I'm not 100% sure what you mean by "transform signal to sound", but I would recommend performing that step on the host computer either way: push the raw data to the host and do your transformation there. I would recommend making a single network stream that handles all the "commands" the host might need to send to the cRIO. The benefit is that only one host can be connected to that stream at a time, which gives you a single point from which to issue commands, so you don't have multiple people trying to control the cRIO at once. As far as sharing data back out to multiple clients, shared variables do work well for this sort of thing; they have their downsides, but they are probably a good place to start. I would highly recommend that you only read data using shared variables and not write with them: all your variables on the host would have their access mode set to "Read" and none to "Write". Once again, this prevents multiple people from issuing commands at once, since you will have multiple clients connecting to the cRIO. One other thing I would recommend, if you do use shared variables, is to enable network buffering on the one that carries your "sound" raw data. Since that appears to be the critical component of your system, buffering should help keep your data stream intact through disconnects or network issues. Hope that helps.
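
Following up on post 4 above: purely as an illustration of the NPM-style idea, this is roughly the shape such a project manifest could take. This is a sketch under my own assumptions; the field names, library names, and repository URLs are all hypothetical and do not refer to any existing tool or package.

    {
      "name": "my-labview-project",
      "version": "0.1.0",
      "labviewVersion": "2013",
      "dependencies": {
        "openg-array": "https://example.com/openg/array.git#4.1.0",
        "acme-reuse-lib": "https://example.com/acme/reuse-lib.git#1.2.0"
      }
    }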
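
Following up on post 9 above: a minimal sketch, in Node.js, of what the "install" step of such a tool could look like, assuming a deps.json manifest mapping library names to Git URLs (optionally pinned with a #tag) and a project-local deps folder listed in .gitignore. The file names and manifest format are my assumptions; a real tool would also need error handling, version resolution, and palette handling.

    // install-deps.js -- hypothetical sketch: clone each dependency repo
    // listed in deps.json into a project-local "deps" folder.
    const fs = require('fs');
    const path = require('path');
    const { execSync } = require('child_process');

    // deps.json maps a library name to a Git URL, e.g. "https://.../array.git#4.1.0"
    const manifest = JSON.parse(fs.readFileSync('deps.json', 'utf8'));
    const depsDir = path.join(__dirname, 'deps');
    if (!fs.existsSync(depsDir)) fs.mkdirSync(depsDir);

    for (const [name, spec] of Object.entries(manifest.dependencies)) {
      const [url, tag] = spec.split('#');       // split URL from optional version tag
      const target = path.join(depsDir, name);  // each library gets its own subfolder
      if (fs.existsSync(target)) {
        console.log(`${name} already installed, skipping`);
        continue;
      }
      console.log(`cloning ${name} from ${url}`);
      execSync(`git clone ${url} "${target}"`, { stdio: 'inherit' });
      if (tag) execSync(`git checkout ${tag}`, { cwd: target, stdio: 'inherit' });
    }

Running this after checking out the main project repo would recreate the deps folder locally, so only the manifest itself needs to live in source control.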
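
Following up on post 13 above: on a cRIO the server loop would of course be written in LabVIEW, but as a language-neutral illustration of the pattern (one TCP server accepting connections from multiple hosts, checking current state, and either acting on or discarding each command), here is a hypothetical Node.js sketch. The port number, command names, and state logic are made up for illustration only.

    // command-server.js -- hypothetical sketch of a TCP command server that
    // accepts connections from multiple hosts and only acts on a command
    // when the controller is in a state that can process it.
    const net = require('net');

    let state = 'IDLE';                     // stand-in for real controller state

    const server = net.createServer((socket) => {
      const host = socket.remoteAddress;    // commands can be filtered per host
      socket.on('data', (data) => {
        const command = data.toString().trim();
        console.log(`received "${command}" from ${host}`);
        if (command === 'STOP' || state === 'IDLE') {
          // safe to process: either a stop request or the controller is idle
          state = command === 'STOP' ? 'IDLE' : 'RUNNING';
          socket.write(`OK ${command}\n`);
        } else {
          // busy: discard (or queue) the command instead of acting on it
          socket.write(`BUSY, ignored ${command}\n`);
        }
      });
    });

    server.listen(6340, () => console.log('listening on port 6340'));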