Everything posted by smithd

  1. You could try going back to the older .NET version: https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019LDcSAM The GZipStream MSDN doc says: "Starting with the .NET Framework 4.5, the DeflateStream [which GZipStream is built on] class uses the zlib library for compression. As a result, it provides a better compression algorithm and, in most cases, a smaller compressed file than it provides in earlier versions of the .NET Framework." I'm assuming that a zip is a zip and the algorithm doesn't matter, but maybe something else changed then as well.
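The "a zip is a zip" intuition can be checked outside .NET: the gzip container format (RFC 1952) is standardized, so output from one compliant implementation decompresses in any other, even when the compressed bytes differ between zlib versions or compression levels. A minimal sketch in Python (not .NET, but the same zlib underneath):

```python
import gzip

# The gzip format is standardized, so any compliant implementation can
# decompress another's output -- even if the compressed bytes differ
# between zlib versions or compression levels.
payload = b"the same payload, compressed at two different levels" * 20

low = gzip.compress(payload, compresslevel=1)   # fast, larger output
high = gzip.compress(payload, compresslevel=9)  # slow, smaller output

# The byte streams may differ, but both decompress to the original data.
assert gzip.decompress(low) == payload
assert gzip.decompress(high) == payload
```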
  2. There's also the argument that you should just get rid of any error wire that has no immediate purpose. So... open file? Wire it up. Write to file? Wire it up. Close file? Why do I care if close file has an error? Nothing hurts my soul more than someone making a math function that has an error in, a pass-through to error out, and a case structure around the math. Whyyyyyyy?
  3. I sometimes get something like B when I start with A and then I Ctrl+drag up in the wrong place :/ Personally I only ever get A, because I use block diagram cleanup, because life is too short. People have a semi-constant level of irritation with me and my code as a result, but
  4. Well, to be fair, there are a lot of variants: https://en.wikipedia.org/wiki/Cyclic_redundancy_check#Polynomial_representations_of_cyclic_redundancy_checks The comment was intended more in the sense of "I can't be assed to figure out which variant Modbus uses when I have a functional implementation right over here"
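For reference, the variant Modbus uses is CRC-16/MODBUS: the reflected polynomial 0xA001, initial value 0xFFFF, no final XOR. A bit-by-bit sketch in Python, checked against the standard test string:

```python
def crc16_modbus(data: bytes) -> int:
    """CRC-16/MODBUS: reflected poly 0xA001, init 0xFFFF, no final XOR."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

# The catalogued check value for the ASCII string "123456789".
assert crc16_modbus(b"123456789") == 0x4B37
```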
  5. yay! Please don't look inside, some of the code is terrible. In all seriousness, I think a lot of the class stuff was overkill for what the code needed to do, but most of it doesn't cause a problem. The biggest issue is the (positive) feedback I posted on yours -- I did the serial stuff wrong, in my opinion, by trying to follow the spec rather than doing it the more pragmatic way (looking at the function code and parsing out the length manually). A series of leaky abstractions all the way from linux-rt up through VISA and into the serial layer in that library led to a ver
  6. If I understand you correctly, you want to name the field in your cluster <JSON>field2. That is, you start with a cluster on your block diagram: {field1:dbl=1.0, <JSON>field2:string='{"item1":"ss", "item2":"dd"}'} When you call Flatten To JSON you get: {"field1": 1, "field2": {"item1":"ss", "item2":"dd"}} because the library automatically pulls off the <JSON> prefix and interprets that whole string as JSON. When you unflatten, the reverse happens.
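The distinction described above -- a string field whose content merely happens to be JSON versus a true nested object -- can be shown with plain Python's json module (the <JSON> name-prefix convention itself is specific to the LabVIEW library):

```python
import json

# Treating the field as an ordinary string: the inner JSON gets escaped.
as_string = json.dumps({"field1": 1.0, "field2": '{"item1":"ss","item2":"dd"}'})

# What the <JSON>-prefix convention produces: a real nested object.
as_object = json.dumps({"field1": 1.0, "field2": {"item1": "ss", "item2": "dd"}})

# Round trip: parsing the nested form yields a dict, not a string.
assert isinstance(json.loads(as_object)["field2"], dict)
assert isinstance(json.loads(as_string)["field2"], str)
```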
  7. I'd suggest: Trying it with a VI that just runs a loop with the i terminal going to an indicator OR: Set the subpanel instance to allow you to open the block diagram (right-click option) and then open the VI at runtime, verifying that it's actually executing correctly -- maybe it got stuck and that's why everything is unresponsive. Checking to see if the Variant To Data function is producing an error or warning -- I seem to recall LabVIEW not liking to convert strict VI refs into generic VI refs. It looks like you are closing the VI front panel? Sometimes that can cause a VI t
  8. I don't understand why these need a tree structure -- just for groupings and the like? In any case, it sounds like TDMS could do the job, and it even has an in-memory version: http://zone.ni.com/reference/en-XX/help/371361M-01/glang/tdms_inmem_open/ You can also use SQLite in-memory, although sadly it has no array support. For the configuration use case, the LAVA JSON library as I recall unflattens JSON into an object tree.
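The SQLite in-memory idea, sketched with Python's stdlib sqlite3 module rather than the LabVIEW SQLite library (the table and key names here are made up for illustration):

```python
import sqlite3

# ":memory:" gives a database that lives only in RAM -- handy for
# transient configuration data, gone when the connection closes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE config (key TEXT PRIMARY KEY, value TEXT)")
conn.executemany(
    "INSERT INTO config VALUES (?, ?)",
    [("rate_hz", "100"), ("mode", "continuous")],
)

# As noted above, there is no native array type: arrays must be
# serialized (e.g. to JSON) or modeled as rows in a child table.
rows = dict(conn.execute("SELECT key, value FROM config"))
assert rows == {"rate_hz": "100", "mode": "continuous"}
conn.close()
```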
  9. Moxa's units look quite nice. I usually go with EtherCAT because it's so dead simple, but their Ethernet modules (ioLogik) come close to swaying me. To add to the list, Beckhoff is best known for EtherCAT but has bus couplers for PROFIBUS, serial, CANopen, DeviceNet, EtherNet/IP, and Modbus TCP. I also quite like their I/O physical design. Along the lines of hooovahh's cheaper unit, you might look at these guys (Raspberry Pi based): https://www.unipi.technology
  10. Do you have performance requirements? (acquisition rate, throughput, latency, buffer size, etc.?) For example, you say "distributed" "automation" tasks, which to me says 10-100 Hz, single-point measurements, continuously, forever. So for this I would probably look at an Ethernet RIO (NI 9147) or EtherCAT (what you linked)... But you are using cDAQ, which is not really an industrial automation device, so I'm confused about what you want. You may also want to add a budget, since for example a cDAQ unit is like $1200, which in the US is like a day of an experienced engineer's time... in a lot of ca
  11. FPGA compilation is basically a simulation, so you can get the clock rate from that and use cycle-accurate simulation of the FFT core to determine throughput performance. So if the calculation can be buffered, I think we all collectively want to know what your control requirement is. From detecting a 'bad' value on your camera, how long do you have to respond? If you have a 100 usec latency budget, it doesn't matter that the GPU can do the processing in 4 usec -- it might not even get there in time. However, an FPGA card with the camera acquisition and processing happening on the same device mak
  12. Re GPU: 2048 16-bit ints is 4096 bytes; per 4 usec that is 1,024,000,000 bytes/sec, or ~976 MB/s. Except it's both directions, so actually ~2 GB/s. If you're using a Haswell, for example (PCIe 3.0), that's 3 lanes already... without giving your GPU any processing time. An x16 card would give you 3.5 usec, assuming the CUDA interface itself has no overhead. As mentioned above, it also depends on the rest of your budget -- what's the cycle time, how much time are you allocating for image capture itself, and what do you need to do with that FFT (if greater than x, write a boolean out? send a message? etc.)?
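Spelling out that arithmetic (the ~1 GB/s-per-lane figure for usable PCIe 3.0 throughput is a rule-of-thumb assumption):

```python
samples = 2048
bytes_per_sample = 2            # 16-bit integers
period_s = 4e-6                 # one block every 4 microseconds

one_way = samples * bytes_per_sample / period_s   # bytes per second
both_ways = 2 * one_way                           # data in, result back out

print(f"one way:    {one_way / 1e9:.3f} GB/s")    # ~1.024 GB/s
print(f"round trip: {both_ways / 1e9:.3f} GB/s")  # ~2.048 GB/s

# Rule-of-thumb usable PCIe 3.0 throughput: ~1 GB/s per lane (assumption).
lanes_needed = both_ways / 1e9
assert 2.0 <= lanes_needed <= 3.0  # consistent with the "3 lanes" estimate
```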
  13. yeah, that pretty wildly depends on where that 4 usec requirement comes from and what you want to do with the result. It seems like an oddly specific number. In any case, with sufficient memory on the FPGA I believe you can do a line FFT in a single pass (although I can't remember for sure), but it's several operations, so you'd have to do a lot of parallelization, clock-rate fiddling, etc. -- at default rates, 4 usec is just 160 clock cycles. Your best bet would probably be to look at the Xilinx core (http://zone.ni.com/reference/en-XX/help/371599N-01/lvfpgahelp/fpga_xilinxip_descriptions/
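The "160" figure follows from the default 40 MHz LabVIEW FPGA top-level clock; quick arithmetic on the budget:

```python
import math

default_clock_hz = 40e6   # default LabVIEW FPGA top-level clock rate
budget_s = 4e-6           # the 4 usec latency budget from the post

cycles = budget_s * default_clock_hz
assert cycles == 160.0    # only 160 clock cycles to work with

# A 2048-point FFT is on the order of N*log2(N) operations, so meeting
# the budget needs heavy parallelism and/or a faster derived clock.
fft_ops = 2048 * math.log2(2048)   # 22528 operations
assert fft_ops / cycles > 100      # >100 operations per clock cycle
```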
  14. A very possible answer is: ghosting. However, there are a lot of unanswered questions in your post. For example: Do you have Mod2 physically wired to Mod3, or do you have a signal generator hooked up to Mod3? What's passed on the "Input_FPGA_Cluster" -- which is actually the output data? Is it constant? Changing? When you scale it by 15, what is the value? Can it be represented by that fixed-point type? How is the input oscillating? By how much? How completely incorrect is it? What are your AI voltage ranges set to? Are they differential or single-ended? My suggestion? Wire AO0 to AI0
  15. I feel like you are conflating OOP with "correct". OOP is a way to accomplish tasks, but it is not the only good way. More to the point, if you don't know much about how to implement it, it seems a better plan to code the way you know as a starting point, while simultaneously trying to learn from other example applications what problems OOP solves. Then, if you reach one of those problems in your code, you can just refactor the code enough to solve the problem. The alternative, deciding "I want to use OOP" and then building your code, can be successful, but it can just as easily end up in a
  16. Maybe by 2020 they'll add some NXG-style slide-out palette animations
  17. They're trying to ease you into NXG. To give a real answer: I didn't notice anything different in 2018, but my main version is 2017.1
  18. As a just-for-fun test, I'd suggest maybe adding an always-copy dot to the class and variant wires here: This will probably do nothing, but the always-copy dot is a magical dot with magical bug-fixing powers, so who knows. You could also pull the Variant To Data function inside of the IPE structure. Fiddling around with that stuff may trick LabVIEW into compiling it differently and help narrow down what's going on... and it takes 5 minutes to test. As to your question about it being RT-specific... I've never heard of such a thing, but have you tried your simple counter modul
  19. I see that they made the same choice NI did on that as well -- limited to "+Infinity" and "-Infinity". It would be nice if it were more accepting (e.g. "Inf", "-Inf", "+infinity")... same thing with booleans. If I type something manually I always forget if it's "true" or "True", and I often forget that it matters. Probably silly to do so, but I eventually just edited JSONtext in a branch to support the different cases. Of course, I admit you do eventually reach YAML-level parsing difficulties: y|Y|yes|Yes|YES|n|N|no|No|NO|true|True|TRUE|false|False|FALSE|on|On|ON|off|Off|OFF
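One way to accept the extra spellings is a normalizing pre-pass before a strict parser. A hypothetical Python sketch (lenient_loads is an invented name, and a real implementation would need to skip over string literals rather than regex the raw text):

```python
import json
import re

# Hypothetical lenient pre-pass: normalize common spellings of special
# values before handing the text to a strict parser. The lookarounds
# exclude quote characters as a crude guard; a real version must skip
# string literals entirely.
_SUBS = [
    (re.compile(r'(?<![\w"])[+]?[Ii]nf(?:inity)?(?![\w"])'), "Infinity"),
    (re.compile(r'(?<![\w"])-[Ii]nf(?:inity)?(?![\w"])'), "-Infinity"),
    (re.compile(r'(?<![\w"])(?:True|TRUE)(?![\w"])'), "true"),
    (re.compile(r'(?<![\w"])(?:False|FALSE)(?![\w"])'), "false"),
]

def lenient_loads(text: str):
    for pattern, replacement in _SUBS:
        text = pattern.sub(replacement, text)
    return json.loads(text)  # Python's json accepts Infinity/-Infinity

doc = lenient_loads('{"a": Inf, "b": -infinity, "c": True}')
assert doc["a"] == float("inf")
assert doc["b"] == float("-inf")
assert doc["c"] is True
```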
  20. Dunno if this came up elsewhere, but I just stumbled across this: https://json5.org It's a "standard" only so far as "some person put something on GitHub", but it might be nice to adjust the parser half* to accept it, if that fits your use cases -- the changes seem pretty simple and logical. I do hand-write some of my config files, but most of the time I just write a "make me some configs.vi" function and leave it at that. Just thought I'd share. *i.e. "be conservative in what you do, be liberal in what you accept from others"
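The JSON5 additions (comments, trailing commas, unquoted keys, single quotes) are exactly what a strict parser rejects, which is easy to demonstrate from the strict side with Python's stdlib json (the lenient side would need the third-party json5 package, not used here):

```python
import json

# Valid JSON5, hand-written config style: a comment, an unquoted key,
# single quotes, and a trailing comma. Strict JSON rejects all of it.
json5_text = """{
    // sample rate in Hz
    rate: 100,
    'mode': 'continuous',
}"""

try:
    json.loads(json5_text)
    strict_accepts = True
except json.JSONDecodeError:
    strict_accepts = False

assert not strict_accepts  # a strict parser needs the stricter subset

# The same data in plain JSON parses fine.
assert json.loads('{"rate": 100, "mode": "continuous"}')["rate"] == 100
```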
  21. You could also just make a Quick Drop shortcut. Quick Drop itself uses a method (I can't remember the name) to pop an item onto your mouse cursor. You could take the scripting code from the right-click tool, look for any property nodes, and if you find one, add a local variable onto the mouse pointer. https://labviewartisan.blogspot.com/2012/11/getting-started-with-custom-quick-drop.html
  22. I wonder who would go to the trouble of embedding secret code into the VI when you can just put code on the block diagram and hide it behind a structure. Well, that's kind of always been the case regardless of this vulnerability. It's code; you should only run code from trusted sources or after inspection. It's also funny that the vulnerability page shows LabVIEW NXG, which gets rid of the VI format entirely
  23. Can you explain your goal? Do you want a continuous stream of images to Python, or just a sequence? What's the purpose of the time interval array? Are those actually all different delays, or fixed? Is there a reason your camera is plugged into a cRIO rather than your computer with the Python script? What is the latency you can permit between acquiring some images and getting them on the Python side? If you can plug your camera into the computer I'd just use: https://pypi.org/project/pynivision/ If you need to stream them for whatever reason, and the time intervals are all constant, I'd
  24. https://forums.ni.com/t5/LabVIEW-Shortcut-Menu-Plug-Ins/Replace-Value-Property-with-Local-Variable-llb/ta-p/3538829 You may wish to edit it to support all property nodes, not just "value" props, but that's not hard (it's most likely just deleting a bunch of code that filters the results). Start here: https://forums.ni.com/t5/LabVIEW-Shortcut-Menu-Plug-Ins/NIWeek-2015-Presentation-on-Shortcut-Menu-Plug-ins/ta-p/3521526 Install: https://forums.ni.com/t5/LabVIEW-Shortcut-Menu-Plug-Ins/How-to-install-plug-ins-that-you-download/ta-p/3517848