
Data Variable Toolkit


Recommended Posts

Hello Everyone,

I have created this toolkit to create named variables of any data type in memory and to access their values by name from any part of your code that is in the same scope.
These variables store instantaneous values. The best use case for this toolkit is to acquire and set variable values in one place and read them from any loop. Do not use it for read-modify-write operations.
Once variables are created in memory, they can be grouped and their values accessed by name.
You can create a variable for any data type and access its value using its name. I have tested this toolkit for memory use and performance, and it is much faster than the CVT and Tag Bus libraries.
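To give a feel for the intended usage pattern, here is a minimal Python analogy (the toolkit itself is pure LabVIEW; the names TagStore, write_tag and read_tag are made up for illustration): one loop writes instantaneous values under names, and any other loop in the same scope reads the latest value by name.

```python
# Conceptual Python analogy of the named-variable idea (illustrative only;
# the real toolkit is a set of LabVIEW VIs, and these names are hypothetical).
import threading

class TagStore:
    """Global store of named, instantaneous values of any type."""
    def __init__(self):
        self._lock = threading.Lock()
        self._tags = {}

    def write_tag(self, name, value):
        # Overwrite the instantaneous value; last writer wins.
        with self._lock:
            self._tags[name] = value

    def read_tag(self, name):
        # Return the latest value written under this name.
        with self._lock:
            return self._tags[name]

store = TagStore()

# An acquisition loop writes by name...
store.write_tag("Pressure", 101.3)
store.write_tag("Status", {"mode": "RUN", "errors": []})

# ...and any other loop in the same scope reads by name.
print(store.read_tag("Pressure"))   # 101.3
print(store.read_tag("Status"))     # {'mode': 'RUN', 'errors': []}
```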

Please check it out and let me know your suggestions. The code is saved for LabVIEW 2015 SP1.

BR,
Aniket Gadekar,
aniket99.gadekar@gmail.com

DataVariableToolkit.zip

Link to comment

Thanks for your contribution.  A couple of things.  Using polymorphics for this type of thing can become a pain pretty quickly.  I had a similar thing for reading and writing variant attributes and used scripting to generate the 60+ data types I supported.  But even then there were times that the data type wasn't supported.  This also added 120+ extra VIs (read/write) adding to loading overhead.  The more modern way of doing this is with a VIM that adapts to the data type provided.  Your VIs were saved in 2015 when VIMs weren't an official thing, but you say you use 2018 where it is.  Back then I'd still support doing this data type adaption with XNodes.  Posted here is my Variant Repository which does similar read/write anything including type def'd enums and clusters.

Putting these in a global space that any VI can read and write from is pretty trivial.  All that is needed is a VIG, functional global variable, or even global variable in a pinch.  This will keep data in memory as long as the VI is still in memory and reserved to be run.  This has other benefits of easily being able to load and save data from a file since it is all contained in a single place.  Also with this technique there are no references being opened or closed, so no memory leaking concerns.  Performance-wise I also suspect your method may have some room for improvement.  If I am writing 10 variables 10 times, looking at your code that will mean 100 calls to the obtain notifier, 100 calls to the send, and 100 calls to the release notifier.  I suspect reading a variant, and then calling the set attribute 100 times will likely take less time and processing power.
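To illustrate the single-repository idea in rough Python terms (this is not the Variant Repository code; the class and file format below are hypothetical): because every named value lives in one container, persisting or restoring the whole state is a single operation, and no per-variable references are ever opened or closed.

```python
# Rough Python analogy of a single global repository (comparable in spirit to a
# VIG / variant-attribute store; names and file format are made up for illustration).
import json

class Repository:
    _data = {}  # one container holds every named value, so nothing to open/close per tag

    @classmethod
    def write(cls, name, value):
        cls._data[name] = value

    @classmethod
    def read(cls, name, default=None):
        return cls._data.get(name, default)

    @classmethod
    def save(cls, path):
        # Because all values live in one place, persisting the whole state is one call.
        with open(path, "w") as f:
            json.dump(cls._data, f)

    @classmethod
    def load(cls, path):
        with open(path) as f:
            cls._data = json.load(f)

Repository.write("Setpoint", 72.5)
Repository.write("Units", "degF")
Repository.save("state.json")
Repository.load("state.json")
print(Repository.read("Setpoint"))  # 72.5
```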

Link to comment
14 hours ago, Aniket Gadekar said:

I have tested this toolkit for memory use and performance, and it is much faster than the CVT and Tag Bus libraries.

So I took a quick peek, nothing too detailed, but from what I saw there is pretty much no way this is unequivocally faster than the CVT or the Tag Bus. With the CVT it might be possible this approach is faster if you give it the worst possible load (10,000,000 simultaneous readers, each of which is reading a random, constantly changing name), but in any sane case you'd look up a reference beforehand and then write to those, and the write time is bounded at basically the same performance as any reference-based data access. For the Tag Bus... it's literally just a cluster with an obscene number of different data types in big arrays. Data access is just indexing an array. There is no way in LabVIEW for data access to be faster than indexing an array. In contrast, you are obtaining a queue by name, which involves several locks, doing a lookup, and then writing to the queue, which requires another lock. The CVT only needs one lock and the Tag Bus requires zero. Memory I'll give you.
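As a rough Python analogy of the lookup argument (a toy sketch with made-up names, not a measurement of the CVT, the Tag Bus, or this toolkit): resolving names once up front and then writing through cached indices is cheaper than paying for a name lookup on every single write.

```python
# Toy illustration of "look up a reference beforehand" versus "look up by name on
# every write". Purely a Python sketch; not a benchmark of any LabVIEW library.
import timeit

names = [f"tag{i}" for i in range(1000)]
by_name = {n: 0.0 for n in names}           # name -> value (lookup on every write)
index_of = {n: i for i, n in enumerate(names)}
values = [0.0] * len(names)                  # plain array, like the tag-bus cluster idea

def write_by_name():
    for n in names:
        by_name[n] = 1.23                    # hash + lookup per write

idx = [index_of[n] for n in names]           # resolve names once, up front
def write_by_index():
    for i in idx:
        values[i] = 1.23                     # pure array indexing per write

print("by name :", timeit.timeit(write_by_name, number=1000))
print("by index:", timeit.timeit(write_by_index, number=1000))
```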

It's also worth you looking at this:

 

Link to comment
21 hours ago, hooovahh said:

Thanks for your contribution.  A couple of things.  Using polymorphics for this type of thing can become a pain pretty quickly.  I had a similar thing for reading and writing variant attributes and used scripting to generate the 60+ data types I supported.  But even then there were times that the data type wasn't supported.  This also added 120+ extra VIs (read/write) adding to loading overhead.  The more modern way of doing this is with a VIM that adapts to the data type provided.  Your VIs were saved in 2015 when VIMs weren't an official thing, but you say you use 2018 where it is.  Back then I'd still support doing this data type adaption with XNodes.  Posted here is my Variant Repository which does similar read/write anything including type def'd enums and clusters.

Thank you for your attention and time. I think that by using the variant data type you can read/write any data type. Also, most users are still using LV2015, so I kept this code in 2015.

 

21 hours ago, hooovahh said:

Putting these in a global space that any VI can read and write from is pretty trivial.  All that is needed is a VIG, functional global variable, or even global variable in a pinch.  This will keep data in memory as long as the VI is still in memory and reserved to be run.  This has other benefits of easily being able to load and save data from a file since it is all contained in a single place.  Also with this technique there are no references being opened or closed, so no memory leaking concerns.  Performance-wise I also suspect your method may have some room for improvement.  If I am writing 10 variables 10 times, looking at your code that will mean 100 calls to the obtain notifier, 100 calls to the send, and 100 calls to the release notifier.  I suspect reading a variant, and then calling the set attribute 100 times will likely take less time and processing power.

I tried to avoid an FGV in this toolkit due to some of its limitations. If you want to load/read variables from a file, you can develop that logic outside the toolkit. I think obtaining a queue/notifier reference, setting the value, and closing the reference 100 times is cheaper than calling an FGV 100 times. I saw your "Variant Repository" and it is nice, but each time you have to connect a wire and pass data into subVIs to perform a read/write operation, whereas in my toolkit you don't have to at all. :)

Again, this toolkit can also store large data, which can be accessed by named reference.

Please let me know if you have any suggestions.

Link to comment
13 hours ago, smithd said:

So I took a quick peek, nothing too detailed, but from what I saw there is pretty much no way this is unequivocally faster than the CVT or the Tag Bus. With the CVT it might be possible this approach is faster if you give it the worst possible load (10,000,000 simultaneous readers, each of which is reading a random, constantly changing name), but in any sane case you'd look up a reference beforehand and then write to those, and the write time is bounded at basically the same performance as any reference-based data access. For the Tag Bus... it's literally just a cluster with an obscene number of different data types in big arrays. Data access is just indexing an array. There is no way in LabVIEW for data access to be faster than indexing an array. In contrast, you are obtaining a queue by name, which involves several locks, doing a lookup, and then writing to the queue, which requires another lock. The CVT only needs one lock and the Tag Bus requires zero. Memory I'll give you.

It's also worth you looking at this:

 

Thank you for your kind attention & valuable time.

I checked this functionality. In it you are using a queued message handler to request reads/writes. I think that will increase the latency of reading/writing a variable when there are many requests, and we never know how many times a developer will call a read/write operation. So instead, I obtain the reference directly, without any message request. Also, VI Register uses an implementation similar to an FGV, which I tried to avoid.

This toolkit is strictly not to be used for read-modify-write operations; every feature has its pros and cons. Again, thank you for your time. Please let me know if you have any suggestions for improvement.

:)
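To make the read-modify-write caveat concrete, here is a tiny Python illustration (hypothetical names, not toolkit code) of the lost update that can occur when two loops each read a tag, modify it, and write it back.

```python
# Why instantaneous-value tags are unsafe for read-modify-write: two loops that each
# read a counter tag, add 1, and write it back can interleave and lose an update.
tags = {"Counter": 0}

# Interleaving: both loops read before either writes.
a = tags["Counter"]        # loop A reads 0
b = tags["Counter"]        # loop B reads 0
tags["Counter"] = a + 1    # loop A writes 1
tags["Counter"] = b + 1    # loop B writes 1, overwriting A's increment

print(tags["Counter"])     # 1, although two increments were intended (lost update)
```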

Link to comment


  • Similar Content

    • By TDF
      The TDF team is proud to offer for free download the scikit-learn library adapted for LabVIEW, as open source.
      LabVIEW developers can now use our library for free as a simple and efficient tool for predictive data analysis, accessible to everybody and reusable in various contexts.
      It features various classification, regression and clustering algorithms, including support vector machines, random forests, gradient boosting, k-means and DBSCAN, and is designed to interoperate with the Python numerical and scientific libraries NumPy and SciPy, just like the famous scikit-learn Python library it is adapted from. (A minimal plain-Python example of this kind of workflow is shown after this list.)
       
      Coming soon: our team is working on the "HAIBAL Project", a deep learning library written in native LabVIEW, fully compatible with CUDA and NI FPGA.
      But why deprive ourselves of the power of ALL FPGA boards? No reason, which is why we are working on our own compiler to make HAIBAL fully compatible with all Xilinx and Intel Altera FPGA boards.
      HAIBAL will offer more than 100 different layers, 22 initializers, 15 activation types, 7 optimizers, and 17 losses.
       
      As we like the AI products of Facebook and Google, we will of course make HAIBAL natively and fully compatible with PyTorch and Keras.
       
      Sources are available now on our GitHub for free: https://www.technologies-france.com/?page_id=487
    • By mhsjx
      Hi,
      I'm a beginner in LabVIEW and have been testing cRIO for about two weeks now. I still cannot solve the problem. I attach my test project for explanation.
      I want to achieve, for example, that with the time sequence t1, t2, t3, t4, DO outputs T, F, T, F, AO1 outputs A1, A2, A3, A4, and AO2 outputs B1, B2, B3, B4, with the delay between AO1 and AO2 as small as possible (AO1 and AO2 may come from different modules).
      I searched Google and the NI forum and decided to use a For Loop and a Loop Timer in the FPGA.
      The reasons are as follows:
      1. To achieve a specific time interval, I can use Wait or the Loop Timer. But in "FPGA 0--Test DO.vi" it cannot achieve a specific time interval; there is an error of several microseconds (maybe larger), and one iteration of the While Loop needs 134 µs. I can't explain how it could achieve a time interval below 134 µs; even if I apparently get a 10 µs delay, the input is not actually 10 µs, so it's not accurate.
      Following the NI example, I use the Loop Timer.
      2. In "FPGA 1--Test DO and AO.vi" I find that the Loop Timer helps me achieve an accurate time interval, but it ignores the first interval. For example, with t1, t2, t3, t4 and desired outputs A1, A2, A3, A4, it goes A1(t2), A2(t3), A3(t4), A4(t1). "FPGA 2--Test DO and AO.vi" has the same problem: DO0 and AO1 go A1(t2), A2(t3), A3(t4), A4(t1), and AO0 is always ahead of DO by t1.
       
      People on the NI forum advise that I should put AO0 and AO1 into one FPGA I/O Node and use an SCTL. But so far I haven't found any example of this (on Google or the NI forum; maybe it is too basic). The main issue is that AO0 and AO1 must follow different timelines, and the dimensions of the input arrays are different. Can anyone offer advice?
      Thanks
      Test.7z
    • By kpaladiya
      I would like to build a model using image data, with an NI cRIO-9063 and NI 9264 for voltage control.
      For the image part, I made a script in Python using OpenCV libraries that detects some points. For voltage control, I use the cRIO-9063 with the NI 9264 voltage controller.
      My question is: I am new to LabVIEW and I have no idea how to make a loop for voltage control in Python. Is there any library available in Python that directly connects to the cRIO and NI 9264 devices? If not, how can I combine my image data (which is in Python) with the cRIO device? I need urgent help.
    • By Makrem Amara
      Hi there,
      I am working on a machine vision project with LabVIEW.
      The camera will locate some parts and send their coordinates via TCP/IP 
      and I created a client, also in LabVIEW, to display these coordinates. Here is how the communication goes:
      First, if the camera detects something, a message is sent to the client to inform it.
      Then, if the message was received correctly, the client responds with another message to request the coordinates.
      Finally, the server sends the coordinates to the client.
      Here I faced some problems:
      1. The messages sent have variable lengths ("x=0,y=0,Rz=0" ==> "x=225,y=255,Rz=5" ==> "x=225,y=255,Rz=90"; the length varies between 16 and 22), so with a constant "bytes to read" it will not display the full message.
      2. The client works fine, but at a certain point it shows errors like "LabVIEW: (Hex 0x80) Open connection limit exceeded" and "LabVIEW: (Hex 0x42) The network connection was closed by the peer. If you are using the Open VI Reference function on a remote VI Server connection, verify that the machine is allowed access by selecting Tools>>Options>>VI Server on the server side".
       
       





    • By drjdpowell
      I am just starting to try to use Python code from a LabVIEW application (mostly for some image analysis stuff).  This is for a large project where some programmers are more comfortable developing in Python than LabVIEW.  I have not done any Python before, and there seems to be a bewildering array of options: many IDEs, libraries, and Python-LabVIEW connectors.
      So I was wondering if people who have been using Python with LabVIEW can give their experiences and describe what set of technologies they use.
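For readers unfamiliar with scikit-learn, the sketch below shows, in plain Python, the kind of classification workflow the TDF post above refers to; it uses the standard scikit-learn API, not the LabVIEW wrapper itself.

```python
# Minimal scikit-learn example (standard Python API, not the LabVIEW adaptation):
# train a random forest classifier on the bundled iris dataset and score it.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```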
