smithd
Posts posted by smithd

  1. I don't think that's possible. If I remember EtherCAT correctly, in your scenario one of the devices that forwards the data packet has been removed, so the packet never returns to the host (http://www.ni.com/white-paper/7299/en/). You'd probably have to rescan the modules to update the configuration (so it knows the last device is gone) and then switch the mode back to active. Beckhoff supposedly claims to have a 'hot connect' feature, but I don't think NI supports that.

  2. 20 minutes ago, Tim_S said:

    Put the entry "HideRootWindow=True" into the ini file for the executable.

    For windows you wish to hide, I believe you also have to disable the 'allow user to minimize window' setting. That may not be necessary if the window itself is set to hidden.
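    For reference, a minimal sketch of what that ini entry looks like. The application name "MyApp" is hypothetical; the section header matches the executable's name, and the file sits next to the exe:

```ini
; MyApp.ini, next to MyApp.exe (names are illustrative)
[MyApp]
HideRootWindow=True
```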

  3. 1 hour ago, monzue said:

    I was trying to code a project in C#, and I was looking for a C# library that would be similar to what you have here.  Do you know of any? Is there a library that is the inspiration for this messenger library?

    I can't speak for him, but the concept of queued actors has been around for a long, long time. The two C# actor-type frameworks I'm aware of are:

    http://getakka.net/
    which is based on the Java framework of the same name (about a decade old now), which in turn was inspired by Erlang (30 years old). Akka seems to be extremely popular, and their website has an obscene amount of documentation.

    https://dotnet.github.io/orleans/
    which is slightly unique in that it's intended for distributed systems (their claim to fame is that various Xbox game services, like Halo 4 and 5, use it). Similar to this is http://proto.actor/docs/what is protoactor, but while the Proto.Actor project seems to have the advantage as far as minimizing dependencies, it's also much, much newer and not backed by Microsoft.
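    The underlying idea is small enough to sketch in a few lines. This is an illustrative queued actor (my own toy, not any of those frameworks' APIs): messages go into a queue and a single dedicated thread drains it, so the actor's state never needs locks.

```python
import queue
import threading

class CounterActor:
    """Toy queued actor: one inbox, one thread, no locks on state."""

    def __init__(self):
        self._inbox = queue.Queue()
        self._count = 0
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        # Messages are handled strictly one at a time, in arrival order.
        while True:
            msg, reply = self._inbox.get()
            if msg == "stop":
                break
            elif msg == "increment":
                self._count += 1
            elif msg == "get":
                reply.put(self._count)

    def send(self, msg):
        """Fire-and-forget message."""
        self._inbox.put((msg, None))

    def ask(self, msg):
        """Send a message and block for the reply."""
        reply = queue.Queue()
        self._inbox.put((msg, reply))
        return reply.get()

actor = CounterActor()
actor.send("increment")
actor.send("increment")
print(actor.ask("get"))  # 2
actor.send("stop")
```

    Because `ask` travels through the same FIFO inbox as `send`, the reply always reflects every message sent before it -- which is the core ordering guarantee these frameworks trade on.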

  4. 2 hours ago, hooovahh said:

    NI just takes a constant of -8000 and subtracts whatever the response value is and uses that as the LabVIEW error code.  This is quite handy because when you get an error code from LabVIEW it will state things like "Service Not Supported", which tells me the type of issue I'm having.  These are the types of error codes that I can't really find an equivalent existing error code for.

    I agree, except thinking about it made me realize that's a pretty common need if you're interfacing with another device. Instead of "bad parameter" it's "you connected to the device successfully, but it says you gave it a bad parameter", or "you connected to the device successfully, but it had an internal error processing your request", etc. There will always be some protocol- or application-specific errors, to be sure, but having a generic family of "this standard error happened, but it didn't happen here, it happened on the remote system" makes sense to me. And of course I realized that's almost exactly what HTTP implements: http://www.restapitutorial.com/httpstatuscodes.html . The 400 series is "you broke" and the 500 series is "I broke", but they are all common enough errors. Except for 418, I'm a teapot.
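    Sketching both halves of that in code (everything here is illustrative: -8000 is just the base from the scheme quoted above, and the function names are mine, not NI's):

```python
# Fold a device's raw response value into a single negative "driver"
# error code, per the -8000-minus-response scheme quoted above.
ERROR_BASE = -8000

def device_response_to_error_code(response_value):
    """Map a device response onto one reserved error-code range."""
    return ERROR_BASE - response_value

def who_broke(http_status):
    """HTTP's generic split for remote errors: 4xx means the caller
    broke, 5xx means the remote system broke."""
    if 400 <= http_status < 500:
        return "you broke"
    if 500 <= http_status < 600:
        return "I broke"
    return "not an error"

print(device_response_to_error_code(1))  # -8001
print(who_broke(503))                    # I broke
```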

  5. 2 hours ago, jacobson said:

    Our systems engineering group has a table where they keep track of projects/toolkits, error code range assigned, and the project owner. When you need custom error codes for your project you will get "assigned" the next chunk of 50 error code values on the list. These error codes are now yours to do with as you please.

    I don't have to manage this list so I can't speak to the drawbacks there but from the perspective of someone who needs to use the system it's pretty nice. I sent a message to someone asking what error codes I should be using and they sent me a link to the internal list, added my name, and told me what error codes to use.

    There was a fairly long period where it wasn't managed; people just went onto the internal page and edited it. Or at least that's what I did.

    To answer the question of where to keep them: we kept them with each project (included in the package build), with a completely separate range for each project, as mentioned. 50 is an arbitrary range, but it seemed to work well for full projects, and I selected 25 for smaller ones (like plugins). The closest I ever got to 50 was probably Modbus, and that was only because (just as darren did above) I translated the Modbus errors, which bloated the range a bit. Once set, it was rare to change the generated code.
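    The assignment table itself is simple enough to model. An illustrative sketch (the chunk sizes match the post above; the starting code is mine, though 5000-9999 is, as far as I know, LabVIEW's user-defined range):

```python
class ErrorCodeRegistry:
    """Toy model of the range-assignment table described above:
    each project gets the next contiguous chunk of codes."""

    def __init__(self, start=5000, default_chunk=50):
        self._next = start
        self._default_chunk = default_chunk
        self._table = {}  # project name -> assigned range

    def assign(self, project, size=None):
        """Hand the next chunk to a project: 50 codes by default,
        maybe 25 for smaller things like plugins."""
        size = self._default_chunk if size is None else size
        assigned = range(self._next, self._next + size)
        self._table[project] = assigned
        self._next += size
        return assigned

    def owner(self, code):
        """Look up which project a mystery error code belongs to."""
        for project, codes in self._table.items():
            if code in codes:
                return project
        return None

registry = ErrorCodeRegistry()
print(registry.assign("modbus"))        # range(5000, 5050)
print(registry.assign("my-plugin", 25)) # range(5050, 5075)
print(registry.owner(5060))             # my-plugin
```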

    8 hours ago, hooovahh said:

    Personally I think it starts with keeping the number of custom error codes to a minimum....  I understand the benefit of custom errors, but the dependency issues, and code ranges usually means I just stick with the ones on the system.

    This is what I've been doing more recently. 80% of errors fall into 10 codes, with maybe some extra metadata. So I made myself a little VI (basically this or this) that does the <append> tag and merges errors if needed, lets you either use the call chain or hard-code a source, and then I twirl through the explain-error dialog until I find an appropriate error. This seems to be in line with what the Linux guys do: EAGAIN means try again, ENOBUFS means "I'm sorry, Dave".

    Maybe I'm an outlier, but I don't see a ton of added value in the error code beyond the boolean. As above, Linux uses some codes like EAGAIN to signal conditions which are arguably not errors, but LabVIEW's easy multiple return values mean I'm 0% likely to encode "try again" as an error. So while I check for errors a lot (and cut off error wires at the source where possible), I really don't care what the error is except to log it, and... let's be honest, error 56 works just as well to indicate a timeout as -1073807339 does. And while people may hate me for it, error code 1 seems like a perfectly sufficient way to indicate an argument error, provided you append something human-readable like "please enter a name into the field, dummy; an empty string doesn't work".

  6. I have a control system that streams lots of data across a local network. If I configure the operator's computer so that they can also log onto the organization's network (via firewalls, etc.), the packet-sniffing and A/V software drives CPU usage from 10% up to 50% :(

     

  7. Considering how long that's been a security concern, I'm surprised there isn't something you can buy which safely adapts a random USB stick to your system. Like a Raspberry Pi that reformats every time it powers on and automatically exposes any mounted USB drive as a network share, or something along those lines (or as a DVD drive, as described above). Or maybe there is such a device and my 10-second Google search didn't find it :/

     

    Am I correct in saying there isn't a similar concern with (micro)SD cards? Or do they also have the potential for firmware hijacking?

  8. 5 hours ago, LogMAN said:

    Yes, I tried sub-modules and decided against them. Not sure about Hg or SVN

    .....

    Of course these are just my experiences based on our workflow. It could be an entirely different case for you.

    It sounds like your first comments are about organization, which I get. When it comes to developing our current common library + mid-level dependencies, I found that I very often would add things and forget to set up the palette correctly, so I kind of got used to plumbing the depths of vi.lib to find the files I needed... but yeah, not ideal for someone approaching it for the first time.

    Maintenance I'm less concerned about. I think submodules are a bit different than subtrees here, but with subtrees you have to tell git what they link to, so it's actually possible for the code owner or project lead to have their git instance "know" about the subtrees while for everyone else they just show up as sections of the same repo. Then if something needs to be merged back to the main repo, it's a bit more controlled -- you sort of have to know what you're doing.

    The other pros I can see are:

    • It's a lot easier to do continuous-integration work when you don't have to deal with VIPM and everything is instead right there in one neat package, ready to build
    • It lets you work with different versions of the common library more easily -- for example, still being able to deploy dependencies at v1.1 while working on v1.2
  9. 5 hours ago, ShaunR said:

    That's my point. The UI didn't just get a rewrite. LabVIEW, as it is now, is slowly being grandfathered in favour of LabVIEW.NET. All the community contributed code will eventually be obsolete and the forums are example led not white-paper led!

    I don't know about that. To pick some examples from the main labview forum:

    https://lavag.org/topic/20126-aligning-two-waveforms/
    I can't imagine this becoming obsolete. Sure, some of the nodes might change (at NI Week they showed how they are distinguishing between two use cases that polymorphic VIs currently cover, so maybe we see "feature detection {2D DBL, peak}" rather than just "peak detection.vi"), but I think the valuable part is the discussion here.

    https://lavag.org/topic/20151-how-to-draw-circles-and-lines-in-intensity-graph/
    Yes, this will obviously become obsolete. But it's also arguably in the wrong forum anyway, as it's a user-interface question. LAVA doesn't have anywhere near enough volume to matter.

    https://lavag.org/topic/20169-storingimporting-daqmx-task-configurations/
    This will probably become obsolete as well, but ideally in favor of an obvious built-in solution to the problem rather than a hack.

    So, sure, there will be some obsolete material during the transition, but I highly doubt it will cause many problems.

    5 hours ago, ShaunR said:

    LabVIEW, as it is now, is slowly being grandfathered in favour of LabVIEW.NET.

    Given your past comments on .NET, I assume you meant this as an insult, but I honestly think it's a better name. For one thing, it rolls off the tongue better than LabVIEW NXG, and more importantly it makes it sound like LabVIEW is trying to align itself with 'a real programming language', which is nice. ;)

  10. Sounds basically right; any file that was changed by two people simultaneously will need some kind of merge. In git, it's the responsibility of the person currently trying to push their commits.

    First they must pull the branch they want to commit to into their own, to make sure they have a matching tree. That is, if master has commits 1, 2, 3 and your local copy only has 1, 2 and you then committed 4, you have to pull commit 3 into your repository so the trees match. If no files changed in 3 were also changed in 4, the merge happens seamlessly, and you then push your changes to the centralized repository, so now you and the central repo both have 1, 2, 3, 4. If 3 changes file A and 4 also changes file A, there is a conflict, and you (the developer of 4) must resolve it before pushing 1, 2, 3, 4 (and potentially 5, the merge commit) to the server. It's definitely a more complicated series of steps than you would usually deal with in SVN, because of the locking, but it's similar in the conflict scenario. It's also the way all merging works, so once you know how to do it between your copy and a master repository, you can do it with branches on your own machine, or with others' forks, or whatever, without much if any additional learning.
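    The 1, 2, 3, 4 story above can be played out end to end with a throwaway repo. This sketch assumes git (2.28 or later, for --initial-branch) is on your PATH; every path is temporary, and "alice" and "bob" are stand-ins for the two developers:

```shell
# Alice pushes commit 3 while bob has committed 4 locally; bob's push is
# rejected until he pulls and merges. 3 and 4 touch different files, so
# the merge is automatic.
set -e
export GIT_MERGE_AUTOEDIT=no
work=$(mktemp -d)
git init -q --bare --initial-branch=master "$work/central.git"
git clone -q "$work/central.git" "$work/alice" 2>/dev/null
git clone -q "$work/central.git" "$work/bob" 2>/dev/null

cd "$work/alice"
git config user.email alice@example.com && git config user.name alice
echo one > A; git add A; git commit -q -m "1"
echo two > B; git add B; git commit -q -m "2"
git push -q origin master                    # central: 1,2

cd "$work/bob"
git config user.email bob@example.com && git config user.name bob
git pull -q origin master                    # bob: 1,2

cd "$work/alice"
echo three >> A; git add A; git commit -q -m "3"
git push -q origin master                    # central: 1,2,3

cd "$work/bob"
echo four >> B; git add B; git commit -q -m "4"   # bob: 1,2,4
git push -q origin master 2>/dev/null || echo "push rejected: bob is behind"
git pull -q --no-rebase origin master        # different files -> clean merge
git push -q origin master                    # central: 1,2,3,4 + merge commit
count=$(git rev-list --count origin/master)
echo "$count commits on master"              # 4 commits + 1 merge = 5
```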

     

    The other thing I've been looking into lately is git subtrees/submodules. They let you embed a repo in another one; so, for example, if you have a common library in your organization, you could (instead of distributing it as a package) distribute it as source embedded in the repositories that depend on it. Git keeps track of what is what, and if you make a change in the subtree, it can be pushed back up to the main repo. That is, if I have shared library A and consuming project B, I could make bug fixes to A and push them back up to A without ever leaving the project I'm currently working on, B. I haven't used it yet, but it seems like an interesting tool for a category of shared code. Obviously if the shared code is super stable there isn't much value, but... Has any git user tried this out, or does Hg/SVN have similar functionality?
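    The submodule flavor of that "fix A from inside B" workflow can also be demonstrated with throwaway repos. A sketch (assumes git 2.28+ on PATH; the protocol.file.allow override is only needed because the demo clones the submodule from a local path, which newer git blocks by default):

```shell
# Shared library "A" is embedded in project "B" as a submodule; a fix made
# inside B's working tree is pushed straight back up to A's repo.
set -e
work=$(mktemp -d)
git init -q --bare --initial-branch=master "$work/A.git"

# Seed the shared library with one commit
git clone -q "$work/A.git" "$work/seed" 2>/dev/null
cd "$work/seed"
git config user.email dev@example.com && git config user.name dev
echo "shared code v1" > lib.txt; git add lib.txt; git commit -q -m "A: initial"
git push -q origin master

# Project B embeds A as a submodule
git init -q --initial-branch=master "$work/B"
cd "$work/B"
git config user.email dev@example.com && git config user.name dev
git -c protocol.file.allow=always submodule --quiet add "$work/A.git" libA
git commit -q -m "B: embed shared library A"

# Fix a bug in A from inside B, and push it back to A without leaving B
cd "$work/B/libA"
git config user.email dev@example.com && git config user.name dev
echo "shared code v2" > lib.txt
git add lib.txt; git commit -q -m "A: bug fix made from inside B"
git push -q origin HEAD:master
count=$(git -C "$work/A.git" rev-list --count master)
echo "$count commits in A.git"   # initial commit + the fix = 2
```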

  11. I see, that makes sense. Yeah, I wouldn't necessarily want to agree to that process either, unless anyone here is aware of a code-review tool that works nicely with LabVIEW. Having an integrated issue tracker can be nice, though, as your commits can link directly to features.

    3 hours ago, hooovahh said:

    I've not used git, but I hear some of these pros/cons.  The one I've heard is that git is sometimes better because it forces you to communicate with your developers to know what they are working on.  But if this were a server-side SVN thing with locks, you'd know what other people are working on because it's locked.  If you need to work on something that is locked, ask the person that locked it if they are done and can unlock it.  I don't need to track down every developer and ask what they are working on or if I can work on a code module.  If something is locked that I want to work on, I talk to that developer about it; if it isn't locked, I'm free to do whatever.

    Well, that assumes they left it locked on purpose, which is the annoying problem I face, especially with folks swapping back and forth between git and SVN or P4. They go on vacation and everything is locked, as an example. As for the communication argument, I would never argue that the lockless approach makes that easier, but I do see it as an easily solved problem. I'd prefer a middle ground, like a 'git flag' or something on a section of code, but I have no idea how it would work.

  12. 3 hours ago, hooovahh said:

    I know this is off topic but I do see this issue once in a while but on Chrome when this happens for me, it doesn't do anything and my text is still there, so I just copy it, refresh the page, and make a new post pasting it in.  Are you saying you hit submit, the page refreshes, but your post isn't there?  Not that I'm equipped to troubleshoot these types of issues.

    Usually when I hit submit and it fails, it at least saves the latest draft, and refreshing the page goes right back to the draft. This time that failed. It's the same general behavior you see.

  13. 16 hours ago, hooovahh said:

    That makes me think that maybe your role is more of an architect than a developer.  I didn't find the CLD very difficult personally.  It was just a single loop QMH using arrays of strings, and I finished early and spent the extra time double checking my work.  The CLA I barely passed, and worked up to the last minute.

    Well, I had a reply, but the website did that thing where hitting submit fails, and I lost it. The short version is:

    - I know they adjust the difficulty pretty regularly, and I know they did so not long after I took my CLD, so maybe that's the difference.

  14. On 6/8/2017 at 9:53 AM, A Scottish Moose said:

    SVN, GIT is popular among the rest of the company (text based dev) but I haven't found any limitations in SVN that make the extra overhead of the GIT workflow worth it.  

     

    4 hours ago, A Scottish Moose said:

    I would be interested in a deeper discussion on a LabVIEW implementation of Git source control.  I am the primary developer here so the overhead doesn't make sense... yet... I can see where it would start to shine in a multi-developer environment.  A discussion on the pros/cons of SVN vs. GIT would be an interesting (if geeky) discussion.

    Below is not really a discussion of the pros and cons. To get a detailed and specific list, just Google it; if there is a dead horse that's been beaten on the internet, it's "git vs. SVN".

    What I'm confused about is the "extra overhead" comment. You've said it twice but haven't given any example of what you mean. There is a learning curve, absolutely, as with any tool, but I don't really see much, if any, additional overhead.

    If you want to use git just like SVN, with a central server, there doesn't seem to be anything stopping you, and the workflow is similar (you have to press "push" after every commit and "pull" before you begin work, but...). This is primarily how I've used it, with the additional features of local commits and branching (I like to commit several times per day even if the code is still broken). Because the projects are small, I've only ever had a few conflicts over the course of several years. I think this is the situation most of us are in: fewer than 5 people working on a given project.

    Bigger LabVIEW projects would have slightly more coordination challenges because of the lack of locking, but this is solvable through two things you should be doing anyway: communicating with your coworkers about what they are doing, and breaking things down into smaller projects. The best example I have of this is https://github.com/LabvieW-DCAF/ . There are 51 repositories by my count, and while some are non-code (documentation, build scripts, etc.), the development granularity keeps things easier to manage regardless of what source control you use.

    And that's actually the point I wanted to make... I started out using SVN, I used Perforce and currently use it for one project, and I've used git whenever possible since I switched over. Fundamentally the procedures involved in any of them are pretty similar (as Shaun said, 'commit and revert'), but I keep returning to git because the 'killer feature' is serverless, lock-free usage. When I work on a feature, the entire codebase belongs to me.

  15. I'm guessing it's deliberately not replacing VIPM for current LabVIEW. You can download and install the builder tools for current LabVIEW, but the build seems to exclude the ability to select by LabVIEW version, symbolic paths, etc. If you peek at the available packages, you'll note that there seems to be a separate package for each supported LabVIEW version. Surely this is not a use case they overlooked, so the only conclusion I can draw is that they excluded it on purpose.

  16. 9 hours ago, pawhan11 said:

    Out of curiosity, can a VIM be packed into an lvlibp?

    When I tried to do that with an XNode I could not; I came to the conclusion that an XNode was some form of lvlib itself...

    Since it is an edit-time construct, I would assume you can use one inside an lvlibp, but you can't expose a function from an lvlibp which adapts to type.

    8 hours ago, ShaunR said:

    I think the incidence of that use case is overstated and probably only applicable to things like OpenG. 

    FPGA works too, and I've got a few things in an internal library which are basically typed versions of the same code. I suppose there might be a way to nicely OOP-ify it, but I'd rather use these.

  17. As for the structure, you may wish to watch the JeffK + Mercer presentation from NI Week, which you can get here.

    My understanding: basically it is a disable structure where, instead of manually enabling/disabling, the compiler runs through all the cases and enables the first case which doesn't cause a compile error. Used in conjunction with a VIM, you can do fancy things. For example, if you wanted to make one node that did "concatenate stuff", you could have two cases: case 1 assuming the inputs are strings, and case 2 assuming they are scalars or arrays. If the type passed in is not a string, case 1 will cause a compile error and the compiler will move on to case 2, with the more flexible Build Array, which will compile. In the NI Week presentation, it sounded like the feature was mostly solid, but too early for them to be comfortable throwing it out to the masses yet.
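    There's no text-language equivalent of a structure whose cases are selected by what compiles, but the effect on that "concatenate stuff" example can be approximated at run time. A hypothetical sketch (the function name and case ordering mirror the description above):

```python
def concatenate_stuff(a, b):
    """Try the narrow 'string' case first; fall back to the more
    general 'build array' case, mimicking a structure that keeps the
    first case that works for the wired-in types."""
    # Case 1: both inputs are strings -> plain string concatenation
    if isinstance(a, str) and isinstance(b, str):
        return a + b
    # Case 2: scalars or arrays -> build one flat array
    a_items = list(a) if isinstance(a, list) else [a]
    b_items = list(b) if isinstance(b, list) else [b]
    return a_items + b_items

print(concatenate_stuff("foo", "bar"))  # foobar
print(concatenate_stuff(1, [2, 3]))     # [1, 2, 3]
```

    The key difference is that the real structure resolves this per call site at compile time, so there is no run-time type check and each instance gets only the code of its winning case.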

  18. 20 hours ago, Mads said:

    The MDI/tabbed interface solution for VIs seems to be one of the most fundamental flaws of NXG GUI.

    You can have multiple windows with different tabs, and you can have a split-screen code/UI view on one tab, so it's certainly still possible to follow most of the same workflow. Like many people, I like the tabbed interface and think it's a significant improvement over the endless cascade of variable-size, variable-position windows. How many times have you opened someone else's code only to have it pop up off-screen because they had more or bigger monitors than you? I've made a script to fix this on a folder of code; it happens so often.

    Something else that may interest you is the data-capture feature, at least from a testing perspective: http://www.ni.com/documentation/en/labview/1.0/data/capturing-data/ and http://www.ni.com/documentation/en/tech-preview/1.0/data/using-analysis-panels-to-process-data/
    When I actually get down to doing some math (e.g., analyze this waveform), I often end up tweaking things. The idea of being able to capture a representative data set, apply it to a math function, capture the output, tweak the math, capture the new output, and compare the results seems like a nice tool to have.

    9 hours ago, ShaunR said:

    That's not happening this time and probably hasn't happened for many years. We've seen LabVIEW stagnate with placebo releases, and there are so many maker boards for less than $100, hell, less than $10, it's no longer funny (and you don't need $4000 worth of software to program them). A while ago I put a couple of real-time Raspberry Pis in a system instead of NI products. You love Arduino, right?

    This is an attitude I remember hearing all the time, and it still interests me. I mean, an Arduino, BeagleBone, or Raspberry Pi is definitely cheap, but what can it actually do that you would otherwise use a cRIO for, or that would be generally useful in your work? I understand the hobby angle, but... what on earth did you use them for in your actual job?

  19. 2 hours ago, hooovahh said:

    I haven't looked at the XML structure much yet, but let's say each object has an offset from origin, telling it where it should be on the block diagram.

    I was just thinking about this too. I believe at present this is right, but it seems like it would make sense to separate the logical linkages from the decorations. I mean, the 'code' is just the nodes and wires. We want to describe their appearance too, but it's completely irrelevant to the 'code'.

    1 hour ago, Mads said:

    it should allow you to do 99% of what you want to do *graphically*. The HTML should be accessible too, sure, but not "in your face". 

    I'm sure this is still an eventual goal, but the stuff they mentioned with regard to editing the HTML manually was all relatively specific and advanced, like dropping in an embedded YouTube or Maps element. I would assume the next step would be a drop-down "script control", etc. I suppose they could use this as a crutch to avoid developing features, but I'd much rather be able to fix an issue by editing the HTML than rely on NI to fix a bug and push a patch.

  20. 1 hour ago, ShaunR said:

    Here's a funny anecdote. I once knew of a very, very large defence company that decided to rewrite all their code in C#. They tried to transition from C++ on multiple platforms to C# but after 6 years of re-architecting and re-writing they canned the project because "it wasn't performant" and went back to their old codebase.

    This is a pretty famous problem within large projects, as with Netscape, Word, Mozilla, etc.: https://news.ycombinator.com/item?id=2139176 or https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/

    I'm not convinced NI hasn't done the same to themselves by neglecting their existing customers in favor of... whoever NXG is supposed to benefit... but I don't think you can blame the language, especially one as large as C# (which includes the core language, the base libraries on top of it, and then the UI frameworks ASP/WPF/WinForms). I'm sure there are similar anecdotes of people who moved from C to C++ and found it 'too slow', or people who never got their product to market because they spent too much time trying to understand their C++ compiler errors.

    1 hour ago, ShaunR said:

    Well. LabVIEW was the best cross development platform. That is why I used it. I could develop on Windows and, if I managed to get it installed, it would just work on Linux (those flavours supported).

    Fair enough -- one thing I've wondered is why they haven't gotten around to cross-compilation now that the compiler is LLVM. Clearly they know how to cross-compile -- they do Pharlap, Linux ARM, Linux x64, and VxWorks without too much of a problem... but there's no obvious cross-compile for desktops.
