Connecting Error Cluster to Bottom of VI


Recommended Posts

Hello All,

First time LAVA poster here with my first question. Why do some LabVIEW programmers insist on wiring the error cluster to the bottom of their VIs, as opposed to the sides as shown in most NI documentation? Is there any benefit to it? Is it 100% a preference thing? Is there a way to make LabVIEW connect error wires like this automatically?

I've only seen it in advanced LabVIEW code from experienced programmers and some parts of the Actor Framework.

Your insight and experience are appreciated!

Capture.PNG


A good use case is when you encapsulate code in a case structure based on error/no error, which is common in subVIs. The error case will usually have the error wire running straight through, while the no-error case may have many VIs that *do stuff* and don't necessarily align their error wires. I don't usually drop the line back down to the base reference between VIs, but there are times when I'll put a few in a row, drop back down, and then come back up for another batch. It's purely cosmetic.
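For anyone who prefers a text analogy: here is a minimal sketch in Python (not LabVIEW; ErrorCluster and do_stuff are hypothetical stand-ins) of the error/no-error case structure pattern described above. The error case forwards the incoming cluster untouched, while the no-error case does the actual work.

    # Python analogy of a subVI wrapped in an error/no-error case structure.
    from dataclasses import dataclass

    @dataclass
    class ErrorCluster:
        status: bool = False  # True if an error occurred upstream
        code: int = 0
        source: str = ""

    def do_stuff(data):
        # Hypothetical placeholder for the VIs that "do stuff".
        return sum(data)

    def sub_vi(data, error_in):
        if error_in.status:
            # "Error" case: the error wire runs straight through.
            return error_in
        # "No Error" case: do the work; failures populate error out.
        try:
            do_stuff(data)
            return ErrorCluster()
        except Exception as exc:
            return ErrorCluster(status=True, code=-1, source=repr(exc))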


Purely cosmetic. That being said, I do have a preference: inputs go in on the left and outputs come out on the right, not from the bottom, unless the terminal is on the bottom. I think it is easier to read this way, but don't get hung up on it.

11 hours ago, jhoehner said:

Capture.PNG

I sometimes get something like B when I start with A and then Ctrl+drag up in the wrong place :/

Personally, I only ever get A because I use block diagram cleanup; life is too short. People have a semi-constant level of irritation with me and my code as a result, but :rolleyes:

Edited by smithd

In some cases B is useful, as said previously, inside a case structure, especially when type def constants are used and overlap the error and other wires.

Cosmetic, yes, but code readability is not only cosmetic.


I generally prefer to lay out my block diagrams with the error cluster in a straight line. It provides a nice datum for aligning code, and it is the most common wire across diagrams. Having this datum on the diagram results in fewer wire bends, which to me "feels right".

I found in the past that if I did not use the error cluster this way, there was generally a slightly messier feel to the diagrams.

Completely personal preference though.


Thanks for the insights, fellas.

I think the main points are that it's mostly a preference-based thing, but it can also help or hurt code readability depending on the situation. I will say, after programming this way for about a month (method B in the image above), I do find that I have fewer wires overlapping the error cluster wire, especially when LabVIEW class wires are thrown into the mix. The nodes that live between the class wire and the error wire are much more accessible when the error wire comes out of the bottom of the VI. I think my visual preference is to prevent overlapping wires whenever possible...

Different strokes I guess.

Thanks again!


There's also the argument that you should just get rid of any error wire that has no immediate purpose. So... Open File? Wire it up. Write to File? Wire it up. Close File? Why do I care if Close File has an error? Nothing hurts my soul more than someone making a math function that has an error in, a pass-through to error out, and a case structure around the math. Whyyyyyyy?

Edited by smithd
On 8/24/2018 at 5:05 AM, smithd said:

Nothing hurts my soul more than someone making a math function that has an error in, a pass-through to error out, and a case structure around the math. Whyyyyyyy?

It may be valid. The math function may behave badly on wrong or empty data sets. Sure, you could check that the actual input arrays are of valid sizes, and a solid algorithm definitely should do that anyway, but the error in is an immediate stopgap against trying to process data that may not even have been generated because of an error upstream.

I understand that not every function needs an error in/error out, but I prefer having a superfluous error cluster on a VI, one that may never really be used for anything but dataflow dependency, over having none and then, when revising the function later and having to add one anyhow, having to go everywhere and rewire the function.

Edited by rolfk

Another reason to have them (though a poor one, really) is that with probes I can check the time it takes to execute functions. These probes generally just operate on the error data type, but with adaptive probes they could likely be made for any type anyway.
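For illustration, a rough Python sketch of that trick (hypothetical names; the real thing is a LabVIEW probe on the error wire): because consecutive nodes share the same error wire, observing it at two points yields the elapsed time between them.

    import time

    def timing_probe(error_in, label):
        # A probe only observes the wire; the value passes through unchanged.
        print(f"{label}: {time.perf_counter():.6f} s")
        return error_in

    err = None                        # "no error" on the wire
    err = timing_probe(err, "before node")
    time.sleep(0.01)                  # stand-in for the node under test
    err = timing_probe(err, "after node")  # delta = node execution time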

On 8/24/2018 at 6:18 AM, Neil Pate said:

Some interesting reading for those who have not seen it yet.

Turn off Automatic Error Handling :-)

 

An End to Brainless LabVIEW Programming.pptx

Yes and no. I find it useful for finding unhandled errors. When I build the final solution, I have a scripting VI that turns it off along with other settings (like debug enable, VI short names, etc.).

19 hours ago, hooovahh said:

Another reason to have them (though a poor one, really) is that with probes I can check the time it takes to execute functions. These probes generally just operate on the error data type, but with adaptive probes they could likely be made for any type anyway.

There are two error practices that may be getting conflated between what you describe, what smithd is concerned with, and what Darren describes in his talk: having a pass-through error (error in/out with no code affecting it), and having a case structure around code that switches on error, one case of which simply passes the error through.
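A minimal sketch of the distinction, as a Python analogy (hypothetical names; None stands for "no error" on the wire):

    def multiply_pass_through(x, y, error_in=None):
        # Practice 1: pass-through error. The work always runs; the error
        # cluster is wired through purely for dataflow sequencing.
        return x * y, error_in

    def multiply_switch_on_error(x, y, error_in=None):
        # Practice 2: case structure switching on error. The error case
        # skips the work entirely and just forwards the incoming error.
        if error_in is not None:
            return 0, error_in  # default output, error passed through
        return x * y, error_in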

Edited by ShaunR
On 8/22/2018 at 2:20 PM, jhoehner said:

Why do some LabVIEW programmers insist on wiring the error cluster to the bottom of their VIs, as opposed to the sides as shown in most NI documentation? [...]

Capture.PNG

The second style is usually used when there are multiple inputs and the terminal spacing forces bends in the wires. If you are forced to have bends, because otherwise the terminals would overlap to keep the tram-lines, then you might as well make more room on the diagram for labels and controls/indicators so it doesn't look cluttered.

2 hours ago, ShaunR said:

There are two error practices that may be getting conflated between what you describe, what smithd is concerned with, and what Darren describes in his talk: having a pass-through error (error in/out with no code affecting it), and having a case structure around code that switches on error, one case of which simply passes the error through.

Yes, I was referring to smithd stating:

Quote

There's also the argument that you should just get rid of any error wire that has no immediate purpose.

I was saying that one reason (again, not a great one, but one nonetheless) is that if you do wire up errors with no immediate purpose, it makes checking the timing between nodes easier by using the error timing probe.

