
Omar Mussa

Members
  • Content Count

    291
  • Joined

  • Last visited

  • Days Won

    10

Omar Mussa last won the day on August 24, 2016

Omar Mussa had the most liked content!

Community Reputation

33

About Omar Mussa

  • Rank
    Very Active

Profile Information

  • Gender
    Not Telling

LabVIEW Information

  • Version
    LabVIEW 8.6
  • Since
    1998

  1. I agree about keeping the sim-related config separate - I have taken this approach and found it useful, but it can also be tricky to ensure that configuration data added to the simulation files during development gets merged into the deployment folder. I've personally found that BeyondCompare helps solve this problem, but it is a manual step that has been hard to enforce through process. The article you suggest looks really promising - I think it is exactly what I was looking for but failed to find in my own searching. I think this is going to help me avoid going down a fairly messy rabbit hole.
  2. We have a large LabVIEW project that is structured as multiple Git repos using Git submodules. Within each project we have our source organized into folders like this: a Project Folder containing a Configuration Data Folder and a Source Code Folder. We typically do development on our development machines, where the Configuration Data Folder contains configuration files that are in simulation mode and use atypical configurations. We also deploy our development system onto tools during development and testing by cloning the repo onto the supported hardware platforms. On these machines, the config…
  3. I personally think of TestSuites as a way to group a bunch of tests into a common test environment rather than as a way to reuse a TestCase for testing multiple parameters. But I can definitely see the value of being able to set the displayed name of the test case on the VI Tester GUI, and how it would help handle the situation you raise of re-running the same TestCase with multiple parameters. The simplest way that I see to solve this problem is to just have 3 TestCases that share some common test code, so that they are easy to debug, and use the three test cases to define the tests for…
  4. I think another way to do Linux on Windows now is via Docker Linux containers... Also, Windows 10 now supports running Linux as a subsystem: https://msdn.microsoft.com/en-us/commandline/wsl/install_guide
  5. I would do the following:
       1. Set up my Parallels VM to use Wi-Fi from the Mac in bridged mode.
       2. Make sure my Windows VM network adapter config is correct to reach the myRIO.
       3. Ping the myRIO - if there is no response, you have a network config issue.
       4. If it is still not working, shut down the VM, restart the Mac, and then start the VM.
       5. Ping the myRIO - you should definitely get a response.
       6. Connect.
     (One other thought: you MAY also have a Windows Firewall issue - you can retry with the Firewall off if all of these steps fail, but I don't think you should need to do that.)
  6. That's interesting and I definitely have never thought of that. Do you know what the differences are between the Matrix data type and Array data type? I tried a couple of OpenG 2D array operations out and they all seem to work ok on the matrix.
  7. Also - since you're using Python for the data processing, you can use Python's built-in web server and avoid the legacy nightmare that is ActiveX (see the server sketch after this list).
  8. I'm using plotly right now in an offline project (via the JS API) for displaying data parsed from CSV data files on a Windows (10) machine. I embed a .NET browser on a VI and load a very basic HTML file that runs some JavaScript. From there, the way I see it there are two pathways: 1 - run a local HTTP server where the data folder is, so that you can get the file data through AJAX calls (see the server sketch after this list); 2 - and I know this sucks, but - utilize ActiveX so that you can use the file scripting object to read data files. I have my code set up to poll a specific file's modified date and… (see the polling sketch after this list).
  9. I think that in this case, the OS flag is OK even if it's project-only, since you (typically?) need to be in a project to work on a target's code anyway (as far as I understand how working with targets works). I'm not sure - I actually think I may have messed that up, and it may have just worked for the case where the dependency already existed where expected. I think the VIPM build process may grab it if it's in the source folder of the package, but I'm not sure - it would definitely be worth testing by deploying the package to a new machine.
  10. That's great! One other thing you might try - I think the build process should also work if the Conditional Disable structure's default case were "empty path" and the "Linux" case were as you coded it - that way it avoids the unnecessary hardcoded path to the .so file in the unsupported (non-Linux) cases. I think the code will still open as 'non-broken' if opened in a Linux target context (as it should be), and it will open broken in a Windows context (as it should be). It's probably best that the code is broken when opened in an unsupported context - because it's better to break at development time…
  11. I would check the "specify path on diagram" option, try passing the path into the CLFN node, and use a conditional disable structure to pass in the extension (or hard-code it to only support Linux targets). Best practice would be to create a subVI with the constant so that you use the same path for all instances.
  12. FYI - I created a blog post on test driven development in LabVIEW using Caraya here: http://www.labviewcraftsmen.com/blog/tdd-in-labview-a-caraya-approach
  13. I've been using a mix of EtherCAT slaves (all third party, none are NI) for several years now, connected to various NI cRIO controllers (9067, 9068, 9035), and I haven't had any major issues. So far, whenever I've had an issue it's been because the vendor had an error in their XML file, and by contacting the vendor I've been able to get fixes in each case without too much hassle. I would warn that if you are using a 9068 you should be aware of this issue: http://digital.ni.com/public.nsf/allkb/9038F4D0429DD7C686257BBB0062D3F3 In my case this issue was a showstopper, so we switched…
  14. This is really good info, thanks for sharing your experience. For #2, are you dynamically loading your FPGA Bitfile from disk (I'm guessing you are not)? I have a feeling that if you did this, the RAD image would work (your bitfile would just be another file transferred via the image and the RT app would load the FPGA on startup from a known location on disk - I haven't tried this but I'm guessing this is how it would work). Dynamically loading the FPGA might not be the best option for all deployment scenarios of FPGAs but it would probably insulate you from this issue.
  15. If you use a custom FPGA, you need to recompile it for any change (unfortunately, even cosmetic UI changes - unless you are running the FPGA emulated target, which you can use to validate simple logic, etc., before you compile for hardware). The RT code, though, can run from source without a recompile (it gets deployed to the RT target and then runs and is debuggable more or less like desktop LabVIEW). I've personally had minor inconveniences where I had to reboot the target because the deployment for debugging failed, but usually a target reboot is all it takes to fix deployment issues during…
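
A footnote on posts 7 and 8 above: here is a minimal sketch of the local-server pathway, written in Python since that is the data-processing language mentioned in post 7. The folder name "data" and port 8000 are illustrative assumptions, not from the original posts; any Python 3.7+ install should work.

    # Serve the local data folder over HTTP so the embedded browser's
    # JavaScript can fetch CSV files with AJAX instead of using ActiveX.
    # The directory name "data" and port 8000 are example values.
    from functools import partial
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    handler = partial(SimpleHTTPRequestHandler, directory="data")
    server = HTTPServer(("127.0.0.1", 8000), handler)
    print("Serving ./data at http://127.0.0.1:8000/")
    server.serve_forever()

The command-line equivalent is python -m http.server 8000 --directory data; the embedded page can then request a file with an AJAX call (XMLHttpRequest or fetch, depending on what the embedded browser supports) against http://127.0.0.1:8000/yourfile.csv.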
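
And a sketch of the file-polling mentioned at the end of post 8, again in Python purely as illustration (the actual implementation in the post is LabVIEW); the watched file name and the one-second interval are placeholders.

    # Poll a file's modification time and react when it changes.
    # watched_file and the 1-second interval are illustrative values.
    import os
    import time

    watched_file = "data/latest.csv"
    last_mtime = None

    while True:
        if os.path.exists(watched_file):
            mtime = os.path.getmtime(watched_file)
            if mtime != last_mtime:
                last_mtime = mtime
                print("File changed, re-reading:", watched_file)
                # ... re-parse the CSV and refresh the plot here ...
        time.sleep(1.0)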