Posts posted by ShaunR

  1. You are the third person in a row to tell me that.  But sometimes one must try to balance risk with the need to keep work coming in.

     

    A second thought.

     

Why not just do it as a normal contractor rather than as a fixed-price project? This is much safer, as there is no deliverable as such.

Perhaps Jason can confirm the algorithm, but I don't think the graph controls just decimate in the strictest sense. I think they do something like bilinear filtering, since small artifacts are not lost, as they would be with pure decimation.

     

Long-term data logging is better served by a DB, IMHO. Then it's just a one-line query to decimate, and you can have a history as big as your disk allows with no memory impact.
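That one-line decimation query can be sketched with SQLite in Python. The table and column names here are made up for illustration; the point is that the database, not LabVIEW memory, holds the full history:

```python
import sqlite3

# In-memory demo database; "log" table and its columns are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, value REAL)")
conn.executemany("INSERT INTO log (value) VALUES (?)",
                 [(float(i),) for i in range(10000)])

# Decimate in one query: keep every 100th row, so 10,000 samples
# come back as 100 plot points regardless of history size.
decimated = conn.execute(
    "SELECT id, value FROM log WHERE id % 100 = 0"
).fetchall()
print(len(decimated))  # 100
```

Only the decimated rows ever cross into the application, so memory use stays flat however large the log grows.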

  3. You are the third person in a row to tell me that.  But sometimes one must try to balance risk with the need to keep work coming in.

     

The issue is this: the risk is very high that they will try to renege on the agreement; they have all but admitted that. Even if you do put a watertight clause into the contract, it is highly likely they will contest it anyway. So you will need deep pockets to defend it in court to get the money you are owed.

     

If it is a large corporation, they have departments dedicated to finding holes in contracts and arguing over every penny. They will use it to get more concessions out of you, by nit-picking at best and by threatening at worst. The sort of corporation you want to do business with is one that only sends contracts to its law department as a last resort, not a first resort. Which do you think they are?

     

Your first defence, should you choose to work with them, is of course the clause in the contract (choose any from the open-source contracts; they all disclaim liability). This is really a management bargaining tool, however, so you can point to it and say "that's not what we agreed". If it goes further than that, you incur huge expense, so you really need a company that is prepared to take that risk in the first place and not go further. Your last defence is Limited Liability Insurance or Professional Indemnity Insurance, to stop them taking your house, car and dog if they win.

     

That's the risk of all consultancy work. It's just better for your health, wallet and integrity to politely decline any companies that have a history of serial disputes with consultants. Get a good lawyer.

     

    DISCLAIMER:

Not legal advice; not a lawyer; not even a particularly good programmer. Make of it what you will.

I am in the process of formalizing a contract with a company that has been known to have had prior issues with LabVIEW developers and has adopted a tough-talking, somewhat litigious tone. I am considering what wording I should put into a Software Development Contract that will protect me from any liability in the event a bug in the software should damage the client's property. Has anyone had experience with this? Or perhaps could suggest a contract template to start off with?

     

I suggest you avoid them like the plague.

  5. I guess I'm a little puzzled why some people see this and some people don't. What is different?

I use strict typedefs quite frequently. Quite often I place instances in class private data and constants from the typedefs in class methods. Note that the .ctl files for the typedefs are in separate .lvlib files.

When I need to change a typedef, I open it within the project, make modifications, apply the changes, and save. I avoid things like renaming an item and inserting another item in the same step, which could obviously cause problems. As long as I am careful in this respect, I haven't (yet) encountered major issues. Maybe I've just been lucky. I'm puzzled what the difference is.

I too have never experienced it. But I remember Daklu writing an example (on lavag) to prove me wrong and show that it really does happen. lol. Maybe the Lavag historian can find it; I can't at the moment.

    I vaguely remember that the reason that I never see it, is because my workflow is to have all the VIs in memory.

IIRC, when you change a typedef, the VIs not in memory are not updated (of course), and so the change is not propagated. When they are next loaded, LabVIEW isn't able to resolve the discrepancy and resets them to the zero value. LabVIEW is able to resolve the difference if the new value is at the end, but not in the middle. So I think I suggested having the old-style VI Tree just so that you could force LabVIEW to load all the VIs.

     

    Since my workflow is to produce examples that act as regression test harnesses, the examples keep all the VI hierarchy in memory so I never see it.

  6. You haven't told the Configure Serial Port which resource to use. Right click on the top left corner terminal and create a constant. Then select your serial port from the list.

    Put the serial port initialise outside the loop (and don't forget to close it when the loop stops)

     

    You should get it working with the Serial port examples first. Then you will see how to use the VIs correctly.

  7. Separated compiled code has been working well for our team over the last 3 years.  There are two instances where we encountered issues:

     

• Packed Project Library builds corrupting the VI Object Cache
  • The VI Object Cache had to be cleared after PPL builds
• TestStand with the LabVIEW Run-Time adapter requires uni-file code if distributed as source

     

    -Brian

     

Just to expand on lvb's points.

     

If you build source plugins, i.e. plugins with diagrams so they can be recompiled on the target system when invoked from an executable, then you must turn off compiled code separation for those VIs; otherwise you will get Error 59: "The source file's compiled code has been separated, and this version of LabVIEW does not have access to separated compiled code."

     

And just to reiterate a point that some people often forget: the global compiled code option only applies to "New Files", so checking it won't change all the existing files to use this method. You have to go and change each VI's setting individually and recompile.

     

    DISCLAIMER:

The above are not "issues" as such, just additional things that you have to bear in mind when using the option; they may fit into the granny's eggs category.

  8.  

After just over a year, someone has claimed the first prize. The original proposer of the competition is in a bit of a quandary :).

When the competition was set, 5 BTC were worth about $75; now they are worth about $3,000. :D The deadline was set for when five working solutions were submitted; however, there has been only one, and the OP has now closed the contest.

     

Kudos to the guy who had to write native LabVIEW code to handle big numbers and elliptic curve multiplication, which is no mean feat.
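For a flavour of what that entails, here is a minimal double-and-add scalar multiplication in Python. Rather than Bitcoin's secp256k1, this uses the small textbook curve y² = x³ + 2x + 2 over GF(17), whose generator (5, 1) has order 19; the structure is identical, only the constants (and the need for a big-number library in LabVIEW) change:

```python
# Toy curve y^2 = x^3 + 2x + 2 over GF(17); generator (5, 1) has order 19.
P_MOD, A = 17, 2

def ec_add(p, q):
    """Add two curve points; None represents the point at infinity."""
    if p is None:
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None  # p + (-p) = infinity
    if p == q:  # point doubling: lambda = (3x^2 + a) / (2y)
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:       # point addition: lambda = (y2 - y1) / (x2 - x1)
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, p):
    """Double-and-add scalar multiplication k*p."""
    result = None
    while k:
        if k & 1:
            result = ec_add(result, p)
        p = ec_add(p, p)
        k >>= 1
    return result

print(ec_mul(2, (5, 1)))   # (6, 3)
print(ec_mul(19, (5, 1)))  # None: 19 is the group order
```

Python's native arbitrary-precision integers do all the heavy lifting here; the LabVIEW solution had to implement that big-number arithmetic by hand as well.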

  9. Same as the original encoding one.  Why wouldn’t it be?

    Exactly.

     

    If we are going to put type information in the JSON stream purely for readability and type checking, shouldn't we also put type information about EXT, DB, SGL, etc.?

     

    This just looks to me like a solution looking for a problem.

  10. Sticking with the JSON spec, it might be better to encode LabVIEW refnums as JSON Objects:
     
{
  "DAQmx Refnum": {"ref": "SampleTask", "LVtype": "UserDefinedRefnumTag"},
  "File Refnum": {"ref": -16777216, "LVtype": "ByteStream"}
}
     
On converting back, we should throw an error on any type mismatch.
     
This is rather verbose, but encoding refnums should be a rare use case, limited to "tokens" that are passed back to the original sending application where the refnums are valid.
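As an illustration, the proposed object encoding and the decode-time type check could be sketched like this in Python (names come from the example above; a LabVIEW implementation would obviously differ):

```python
import json

# Hypothetical refnum serialised as a JSON object with an explicit type tag
# (values taken from the example above).
encoded = json.dumps({
    "File Refnum": {"ref": -16777216, "LVtype": "ByteStream"}
})

def decode_refnum(doc, name, expected_type):
    """Decode one refnum field, throwing an error on any LVtype mismatch."""
    obj = json.loads(doc)[name]
    if obj["LVtype"] != expected_type:
        raise TypeError(f"expected {expected_type}, got {obj['LVtype']}")
    return obj["ref"]

print(decode_refnum(encoded, "File Refnum", "ByteStream"))  # -16777216
```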

     

    You know that "refnum" is short for "reference number", right?  

A file refnum is just a number. In fact, it is the file reference returned by the OS. The same goes for, say, TCP/IP. It just strikes me that this is a bit like adding the typedef information into the JSON stream (e.g. Control, TypeDef, StrictTypeDef), which is interesting, but not useful.

     

     

Let's assume that having specific LabVIEW reference types defined as objects is desirable. What does the LVtype:ByteStream give you that the cluster wired as the "template" doesn't? How will you be able to tell that a DAQ Task ID is a UserDefinedRefnumTag rather than a physical channel? What will the cluster template for the variant decoding look like?

     

DAQ Task IDs are not refnums. In fact, they are the same for all IO (like VISA): a typed string, like "path". If you plonk a DAQ Property Node onto a diagram, you can quite happily connect a string to the "ref" input (which stands for reference, not refnum). You'll see a conversion dot as it gets converted to the Task ID type; the same goes for VISA references.

     

    Are these "pre-thoughts" to object serialisation? Here, There Be Monsters

I decided to convert the refnums to either I32 or string depending on their type, and then include the class of the refnum preceded by a "=>". Including the class is valuable from my viewpoint in that it makes the JSON more readable. When converting from a JSON string back to a variant, I use the "Variant (Data Type)" input to define the class and ignore the class name in the JSON string.

     

Refnums are numerical types in all languages (even LabVIEW). However, whilst it may be "convenient", I have reservations about injecting LabVIEW-specific formatting into a language-agnostic serialisation. What if a string contains "=>"?

  12. I think this is a weak argument. I mean I agree that it isn't very flexible, but how often do you start a comment with # and then no space? That is the only time a bookmark is made. If you have the comment "Here we take our # of samples and average them" it won't make a bookmark. You need to deliberately make a comment starting with # then no space.

     

     

Yeah, this one I think is a big oversight by NI. I mean, they have a product which semi-relies on free labels being in a specific format to be able to pull requirements. When making the Bookmark Manager, they should have somehow incorporated that standard. That being said, I know you can modify the Requirements Gateway reading functions to look for #tag instead of [tag], just like I think it uses [covers], which could be #covers.

     

I just think that whatever you came up with (which looks great and probably is more flexible) probably isn't as fast as NI's implementation, because they were able to put whatever hooks into the VI to make finding bookmarks faster. Wouldn't it be nice if your code used the Bookmark Manager searching tools instead of your own, assuming NI's is faster for larger projects?

     

EDIT: By the way, awesome discovery: bookmarks are saved in the VI in a human-readable format. They can be found by going to the BKMK block of the VI file. This can make finding bookmarks much faster, since you don't even need to open a VI reference to the file; you simply open it and read the bytes of the file.
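Reading the raw bytes rather than opening a VI reference could be sketched like this in Python. The BKMK block name comes from the post above; the surrounding resource-file container format is not parsed here, so this is just a crude byte scan:

```python
def find_bkmk_offset(path):
    """Crude scan: return the byte offset of the first 'BKMK' marker
    in a .vi file without opening a VI reference, or -1 if absent."""
    with open(path, "rb") as f:
        data = f.read()
    return data.find(b"BKMK")
```

A real tool would walk the VI's block table to find the BKMK block properly, but even this scan avoids the cost of loading the VI into LabVIEW.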

     

I paste code snippets from C, INI files and SQL on the diagram, so # isn't a good choice for me. Not saying you can't; just saying it's an arbitrary choice what to use as an identifier, and this particular one has drawbacks (and you can't change it).

     

Using the built-in system won't get me any benefits. In fact, it will limit its use to LV2013+ (I use 2009 by choice). Besides, PP doesn't just do tags. You can also see if VIs have default icons, are broken, have descriptions and history filled out, FP hints, etc. Tags are just a small part, and everything else requires opening the VI anyway.

  13. Because NI has support for custom Bookmark Managers, I'm guessing that Shaun could make his own that merges the functionality of the two.

Hashes are not a good choice, as people use them as a shortcut for "number" or "hash", which adds noise to the listings. The "tags" are definable in PP, so you can use what you like, but they have to be enclosed: (tag), [tag], (my favourite) ~tag~, #tag# etc. But you could have #tags (methinks there are too many twitterers - or is that twits - at NI).

Additionally, you can assign different meanings to different tags. For example, I use [tag] as "Requirements" (à la the NI requirements thingy) for calculating requirements coverage.

I really must get around to productionising it. It's the "help" document that's putting me off, as it will be huge due to the features, which are easy to use but take a lot of words to explain, like the plug-in system and custom queries.

  14. Hi SDietrich

     

It seems libpq.dll has a load of other dependencies. When loading, it couldn't find libintl.dll. Looking at the website, the binary download is 44 MB, so there are a few more than just libintl.dll. I downloaded the zip and there were a few wxWidgets DLLs that you probably don't need, but there were the libeay, zip and a few other binaries (including libintl.dll). For an out-of-the-box experience, some of those will need to be bundled too.

  15. Hi all,

    Any thoughts on the simplest way to trigger a LabVIEW event using an external script in Windows? I want to be able to poke my LabVIEW application and kick off a predefined code block. No need to pass any information/message.

     

    I'm looking for something very passive, unlike, for example, a listening TCP socket.

     

    Thanks!

The simplest way is to use VBScript to press buttons on the FP.

Good suggestions, both of you. I have a whole set of internal libraries I'm using for 2D picture elements in this application, so I'd rather not make the switch to .NET or 3D just yet. That said, this may be the last time I use the native 2D picture routines for any serious work.

     

    I threw this together this morning as a proof:

[Attachment: Wu.png]

     

The performance is as expected: hideously slow when it comes time to flatten the result (do not try to render the raw op-code string). It's a literal interpretation of the Wu algorithm as described at Wikipedia, since all the optimization in the world won't change how slow it is to render op-code-based dots in LabVIEW.

     

This is just a proof of concept; at this stage I'm happy enough to say it can be done and defer the implementation until later. It still needs color, line width, and ditching of the op-codes in favor of a raster buffer, which should improve things by at least an order of magnitude.

     

    Example code is LV2013, public domain.
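For reference, the core of Wu's algorithm (the part being interpreted literally above) looks like this in Python. This is a sketch that assumes integer endpoints and skips Wu's fractional endpoint handling; it is not the poster's LabVIEW code:

```python
import math

def wu_line(x0, y0, x1, y1):
    """Anti-aliased line per Wu's algorithm, integer endpoints only.
    Returns (x, y, brightness) tuples with brightness in (0, 1]."""
    pixels = []
    steep = abs(y1 - y0) > abs(x1 - x0)
    if steep:  # iterate along the longer axis
        x0, y0, x1, y1 = y0, x0, y1, x1
    if x0 > x1:
        x0, x1, y0, y1 = x1, x0, y1, y0
    gradient = (y1 - y0) / (x1 - x0) if x1 != x0 else 0.0
    y = float(y0)
    for x in range(x0, x1 + 1):
        base = math.floor(y)
        frac = y - base  # split brightness between the two nearest rows
        for yy, c in ((base, 1.0 - frac), (base + 1, frac)):
            if c > 0:
                pixels.append((yy, x, c) if steep else (x, yy, c))
        y += gradient
    return pixels
```

Each column contributes two pixels whose brightnesses sum to 1, which is exactly the coverage-splitting trick that makes the line look smooth; a raster-buffer renderer would write these weights into a 2D array instead of emitting op-code dots.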

Maybe create it as a plugin filter for Bitman. I always found Vugie's Bitman far superior in rendering performance for anything I do in 2D.

  17. Hi SDietrich.

     

    The DLL as of version 9.3.1 is included in the package

     

     

The DLL isn't in the package so I couldn't check, but I looked at the CLFNs and they are set to re-entrant. So did you compile with "--enable-thread-safety"? (Otherwise it will have to be run in the UI thread.)

     

A note in the description of exactly which platforms/LabVIEW versions you are supporting would also be useful (Windows, Linux, Mac, VxWorks etc.). For Windows users it would also be of benefit to say whether you are supplying just the 32-bit DLL or both the 32-bit and 64-bit DLLs (LabVIEW x64 cannot load 32-bit binaries, and vice versa).

     

    jgcode is the Lavag Tools Network admin. Send him a PM and he can advise on how to get it on the tools network under the Lavag banner.

  18. I'm looking to do some line drawing in the picture controls which frankly have to be anti-aliased. Really basic stuff that can be entirely software based. Does anyone know if the routines LabVIEW uses to draw lines in the graph/chart controls are exposed and can be re-used in any way?

     

    I'm not against implementing something like the Wu algorithm myself, but clearly LabVIEW has already implemented something...

     

    The scene object has anti-aliasing which is used in the 3D Graph Controls. Create a scene object and select "Specials>Anti-aliasing"

After doing some testing, I found that JSON to Variant can't handle empty paths. The problem is caused by the OpenG Scan Variant From String throwing error 1, because the string it is scanning is empty. A simple fix would be to add an additional case structure to handle paths within JSON to Variant.

     

I think the OpenG Scan Variant From String is obsolete, since JSON to Variant can handle all types now (apart from refnum and path, which are easily added). We should consider removing the OpenG Scan Variant From String completely. I would also suggest that for an unknown data type we output a string type and raise a warning rather than an error.

  20. Hi.

I have a small program that helps me move some specific files to specific folders.

The file sizes vary from 400 MB to around 10 GB; they are transferred from a local disk to a network disk (mounted as the Z: drive).

     

Currently I am using the MoveFileA function from kernel32.dll, and it works fine. It requires two arguments, path from and path to - easy peasy.

     

My problem/request is that I would like to have some sort of progress bar when moving large files, so I would like to monitor the size of the copied data.

For this I think I should use the function MoveFileWithProgressA, also from kernel32.dll.

     

    from microsoft: http://msdn.microsoft.com/en-us/library/windows/desktop/aa365242(v=vs.85).aspx

    I get:

BOOL WINAPI MoveFileWithProgress(
  _In_      LPCTSTR            lpExistingFileName,
  _In_opt_  LPCTSTR            lpNewFileName,
  _In_opt_  LPPROGRESS_ROUTINE lpProgressRoutine,
  _In_opt_  LPVOID             lpData,
  _In_      DWORD              dwFlags
);

where we again have path from and path to,

and then (arguments 3 and 4) I should give a pointer to a CopyProgressRoutine callback function, plus some arguments for this function.

     

My problem is that I don't know what that means, and furthermore, I don't know how to get a file size number or something from this.

     

I hope my problem/request is clear. I would like some help in understanding the CopyProgressRoutine callback function and how to use it.

     

    (Windows 7 64bit)

     

    Regards

    Jørgen Houmøller

     

LabVIEW has no way to create callbacks that can be called from external code (the exception being some .NET functions). You need to create a DLL wrapper that supplies the callback function and proxies it via a LV prototype (e.g. an event using PostLVUserEvent), which can then be used to get the callback data.

     

If there is an equivalent .NET function, you may be able to use the callback primitive in the .NET palette to interface to it.
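For comparison, in a textual language the callback can be passed straight to the API, which is what the wrapper DLL has to supply on LabVIEW's behalf. A Python/ctypes sketch of the CopyProgressRoutine shape (the file paths are made up, and the actual move only runs on Windows):

```python
import ctypes
import sys

# On Windows the callback must be stdcall (WINFUNCTYPE); fall back to
# CFUNCTYPE elsewhere so the sketch still loads for illustration.
FUNCTYPE = getattr(ctypes, "WINFUNCTYPE", ctypes.CFUNCTYPE)
PROGRESS_CONTINUE = 0  # tell the OS to keep copying

@FUNCTYPE(ctypes.c_ulong,                       # return: DWORD
          ctypes.c_longlong, ctypes.c_longlong, # TotalFileSize, TotalBytesTransferred
          ctypes.c_longlong, ctypes.c_longlong, # StreamSize, StreamBytesTransferred
          ctypes.c_ulong, ctypes.c_ulong,       # dwStreamNumber, dwCallbackReason
          ctypes.c_void_p, ctypes.c_void_p,     # hSourceFile, hDestinationFile
          ctypes.c_void_p)                      # lpData
def progress(total, transferred, *_):
    """Called by the OS during the move; drives the progress bar."""
    if total:
        print(f"{100 * transferred // total}% done")
    return PROGRESS_CONTINUE

if sys.platform == "win32":
    # MOVEFILE_COPY_ALLOWED (0x2) lets the move fall back to copy+delete
    # across volumes. Paths are purely illustrative.
    ctypes.windll.kernel32.MoveFileWithProgressW(
        "C:\\src\\big.dat", "Z:\\dst\\big.dat", progress, None, 0x2)
```

The total/transferred byte counts arrive as the first two LARGE_INTEGER arguments, which is exactly the "file size number" the question asks about; a LabVIEW wrapper DLL would forward them via PostLVUserEvent instead of printing.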

  21. This is what I ended up doing. I spent days trying to figure out which dependencies were required, and gave up in the end after trying many different "sensible" permutations.

     

I have gained a whole heap of Inno Setup knowledge and random installer know-how now though, so I suppose it was time well spent.

     

If you are as impressed with Inno Setup as I am (I couldn't live without it), you might consider a donation.
