
JKI NIWeek 2016 Presentation Videos


Tomi Maila

Recommended Posts

We recorded all of JKI's NIWeek 2016 presentations and have made them available online: AI and deep learning with Javier Ruiz & Ian McFarlane, .NET integration and interface design with Sarah Zalusky, and Caraya unit testing with Jim Kring. Good stuff!

Caraya: A New Take on LabVIEW Unit Testing - TS9754
by Jim Kring

Designing a LabVIEW Interface for .NET Applications - TS9757
by Sarah Zalusky

Artificial Intelligence With LabVIEW: Deep Learning-Based Classification and Control - TS9758
by Javier Ruiz & Ian McFarlane

Watch videos

  • Similar Content

    • By AndyS
      Hi!
      I have to convert a dynamically generated array into a JSON string and back. Unfortunately, I found that the unflatten method loses the variant data. See the screenshots of the front panel and block diagram, and the comments inside.
      JSON_Text_test.vi
       

       
      Is this a bug in JSON Text, or is my data construction simply not supported? In the latter case I would have to modify large parts of my code, so I hope it is a bug 😉
       
      The second thing I noticed is that the cluster element name "Value" is not used during flattening; instead, the name of the connected constant / control / wire is used. I found a VI ("Set Data Name__ogtk.vi") in the OpenG Toolkit that lets me set the variant data name programmatically, but as you can imagine I would prefer not to depend on the OpenG VI.
       
      Thanks in advance for your kind help 🙂
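       
      For readers coming from text-based languages, a rough analogy to the first problem: JSON stores plain values, not type metadata, so a container that carries its own type descriptor (a LabVIEW variant here; a Python tuple in the hypothetical sketch below) round-trips its values but not its type. This only illustrates the general limitation, not JSON Text's internals.
```python
import json

# JSON keeps values but discards type metadata: the tuple (standing in,
# loosely, for variant data with a type descriptor) goes in as a tuple
# and comes back as a generic list.
original = {"Value": (1, 2, 3)}

text = json.dumps(original)      # '{"Value": [1, 2, 3]}'
restored = json.loads(text)

print(restored["Value"])         # [1, 2, 3] -- the values survive
print(type(restored["Value"]))   # <class 'list'> -- the tuple type is gone
```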
       
    • By kartik.azista
      Has anyone tried creating a subVI programmatically by selecting a set of block diagram objects through scripting?
    • By TDF
      The TDF team is proud to offer the scikit-learn library, adapted for LabVIEW, as a free open-source download.
      LabVIEW developers can now use our library for free: simple and efficient tools for predictive data analysis, accessible to everybody and reusable in various contexts.
      It features various classification, regression, and clustering algorithms, including support vector machines, random forests, gradient boosting, k-means, and DBSCAN, mirroring the famous scikit-learn Python library (which is designed to interoperate with the Python numerical and scientific libraries NumPy and SciPy).
       
      Coming soon: our team is working on the "HAIBAL Project", a deep learning library written in native LabVIEW, fully compatible with CUDA and NI FPGA.
      But why deprive ourselves of the power of ALL FPGA boards? No reason! That is why we are working on our own compiler to make HAIBAL fully compatible with all Xilinx and Intel Altera FPGA boards.
      HAIBAL will offer more than 100 different layers, 22 initializers, 15 activation types, 7 optimizers, and 17 losses.
       
      As we like Facebook's and Google's AI products, we will of course make HAIBAL natively compatible with PyTorch and Keras.
       
      Sources are available now for free on our GitHub: https://www.technologies-france.com/?page_id=487
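       
      Since the LabVIEW bindings are described as an adaptation of the Python scikit-learn library, this minimal Python example illustrates the kind of workflow (load data, fit, score) the toolkit wraps; it uses only the standard scikit-learn API, not the LabVIEW bindings themselves.
```python
# Minimal scikit-learn classification example in Python, showing the style
# of pipeline the TDF LabVIEW adaptation is described as exposing.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)                      # train on the training split
print("accuracy:", clf.score(X_test, y_test))  # evaluate on held-out data
```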
    • By mhsjx
      Hi,
      I'm a beginner in LabVIEW and have been testing a cRIO for about two weeks, but I still cannot solve this problem. I have attached my test project for reference.
      I want to achieve the following: with a time sequence t1, t2, t3, t4, DO outputs T, F, T, F, AO1 outputs A1, A2, A3, A4, and AO2 outputs B1, B2, B3, B4, with the delay between AO1 and AO2 as small as possible (AO1 and AO2 may come from different modules).
      After searching Google and the NI forums, I decided to use a For Loop and the Loop Timer on the FPGA, for the following reasons:
      1. To realize a specific time interval I can use either Wait or the Loop Timer. But in "FPGA 0--Test DO.vi" the interval is off by several microseconds (maybe more), and one iteration of the While Loop takes 134 µs. I cannot see how it could realize an interval below 134 µs; even when I appear to get a 10 µs delay, the actual timing does not match the requested 10 µs, so it is not accurate.
      Following the NI example, I use the Loop Timer instead.
      2. In "FPGA 1--Test DO and AO.vi" I find that the Loop Timer gives me an accurate time interval, but it skips the first one. For example, with t1, t2, t3, t4 and desired outputs A1, A2, A3, A4, it produces A1(t2), A2(t3), A3(t4), A4(t1). "FPGA 2--Test DO and AO.vi" has the same problem: DO0 and AO1 produce A1(t2), A2(t3), A3(t4), A4(t1), and AO0 is always ahead of DO by t1.
       
      People on the NI forum advised me to put AO0 and AO1 into one FPGA I/O Node and use a single-cycle Timed Loop (SCTL), but so far I have not found any example of this (on Google or the NI forums; maybe it is too basic). The main difficulty is that AO0 and AO1 must follow different timelines, since the dimensions of the input arrays differ. Can anyone offer advice?
      Thanks
      Test.7z
    • By kpaladiya
      I would like to build a model that uses image data, with an NI cRIO-9063 and an NI 9264 for voltage control.
      For the image side, I wrote a Python script using the OpenCV libraries that detects some points. For voltage control, I use the cRIO-9063 with the NI 9264 analog output module.
      My question: I am new to LabVIEW and I have no idea how to build a voltage-control loop in Python. Is there any Python library that connects directly to the cRIO and the NI 9264? If not, how can I combine my image data (which is in Python) with the cRIO? I need urgent help.
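       
      One common pattern for this situation (a sketch, not an NI-endorsed solution): keep the deterministic voltage-control loop in LabVIEW on the cRIO, and have the Python/OpenCV script stream its detected points to the cRIO over TCP. The address, port, and line-delimited JSON message format below are assumptions; the LabVIEW side would need a matching TCP Listen/Read loop that drives the NI 9264 outputs.
```python
import json
import socket

# Hypothetical cRIO address and port -- the LabVIEW application on the
# cRIO-9063 would listen here and update the NI 9264 outputs from the
# received point coordinates.
CRIO_ADDRESS = ("192.168.1.10", 6340)

def send_points(points):
    """Send a list of (x, y) points detected by OpenCV as one JSON line."""
    message = json.dumps({"points": points}).encode() + b"\n"
    with socket.create_connection(CRIO_ADDRESS, timeout=2.0) as conn:
        conn.sendall(message)

send_points([(120, 45), (130, 50)])  # example payload
```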