All Activity

  1. Past hour
  2. I'm intellectually intrigued by the project, but I hesitate to help, since the tool you're building would let someone create a new EXE that looks like one from a reputable source but has had key components replaced. That is, of course, something someone could do today (in LV or any other programming language) given enough effort and time. But it takes effort and time, and I don't think I should help short-circuit that, given who I work for. I am interested in your use case. I take it you have some EXE that you don't have the source code for, but that you need or want to change?
  3. This likely won't get you very far. Yes, you have the files that were in the EXE extracted, which is useful, and in theory you should be able to add all those files to a project and rebuild it. But if you are trying to do this so you can rebuild in a newer version of LabVIEW, or so you can edit some part of the EXE, then you aren't going to be able to.

     When a set of VIs is turned into a binary, they are compiled for that target and runtime version. Then in almost all situations the block diagrams are removed, and in many cases the front panels are removed too. What is in the EXE is still VI files, but most are just the compiled component, with no source and no way to edit them. If debugging was enabled when you built the EXE, then block diagrams and front panels will still be included, and extracting the files will give you the VI source, which can then be recompiled or edited like any normal VI.

     So the files you extracted could probably be added to a project and rebuilt, but you won't be able to edit anything in any of the VIs. You might be able to replace one of the VIs with a new one built from source, if you recreate all of its functionality and keep the same connector pane and name, but I've never tried that. Still, anything you discover is good information, and the community welcomes anything you are able to figure out.
  4. Today
  5. Thanks for the background info, that's good to know. The ZIP format is chunked with recurring headers, so changing one header might not have been enough; that's probably why the whole ZIP is xor'ed. This is actually quite poor design: the xor goes a single byte at a time, with each key byte depending on previous results. 8-bit operations are slow on newer CPUs (compared to 32- or 64-bit ones), and the chaining makes it impossible to use multiple threads, so the decryption is unnecessarily slow (see the sketch below). Anyone beyond student grade would now consider dividing the archive into blocks and decrypting those blocks separately on different CPU cores. But I guess back when the algorithm was created, that might not have been so obvious. As for the VIs having some blocks removed from the final EXE: it isn't actually possible to secure an environment such as LabVIEW completely. It's just a matter of someone having enough free time, though the amount of time required might be really considerable here. (Not that this is a wrong strategy on NI's side; after all, all modern security algorithms are based on the long time required to bypass them.)
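     To illustrate the serial dependency described above, here is a minimal C sketch of a chained single-byte xor decryption. The key-update rule is a made-up placeholder, not the actual LabVIEW scheme; the point is only that byte i cannot be decrypted before byte i-1, which rules out multithreading the loop.

         #include <stddef.h>
         #include <stdint.h>

         /* Decrypt in place. Each key update depends on the byte just
          * produced, so iteration i cannot start before iteration i-1
          * finishes: the loop cannot be split across threads. */
         void chained_xor_decrypt(uint8_t *buf, size_t len, uint8_t key)
         {
             for (size_t i = 0; i < len; i++) {
                 uint8_t plain = buf[i] ^ key;   /* decrypt one byte        */
                 key = (uint8_t)(key + plain);   /* hypothetical key update */
                 buf[i] = plain;
             }
         }

     A block-wise scheme, by contrast, would derive an independent key per block, letting each block be decrypted on its own core.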
  6. Of course it is. They changed the PK\x03\x04 identifier that is in the first four bytes of a ZIP stream, because when they shipped it with the original identifier there was a loud scream through the community that it was very easy to steal the IP contained in a LabVIEW executable. And yes, it was easy: most ZIP unarchivers have a habit of scanning a file for this PK header no matter where it sits in the file, and if the local file header structure following it makes sense, they will simply open the embedded ZIP archive (see the sketch below). This is because many generators for self-extracting archives simply tacked an executable stub in front of a ZIP archive to make it work as an executable.

     The screaming about stealing IP was IMHO totally out of proportion; the VIs in an executable have no diagram, no icon, and usually not even a front panel (unless they are set to show their front panel at some point). But NI listened and simply changed the header signature for the embedded ZIP stream, and all was well 😆.

     The ZIP functions available in LabVIEW are a byproduct of integrating the minizip and zlib sources into LabVIEW, both for compressing binary data structures inside VIs to make them smaller and for using a ZIP archive in executables rather than the old <=8.0 LLB format. The move away from the embedded LLB was mainly needed because, with the introduction of classes and lvlibs, VI names alone were no longer always unique and therefore couldn't be stored in the single-level LLB anymore. They needed a hierarchical archive format, and rather than extending the LLB format to support subdirectories, it was much easier to use the ZIP archive format; the zlib sources came with a liberal enough license to do that.
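     A minimal sketch of the kind of signature scan described above: walk the file byte by byte looking for the four-byte local file header signature PK\x03\x04. This is a generic illustration, not the code of any particular unarchiver.

         #include <stdint.h>
         #include <stdio.h>

         /* Return the offset of the first ZIP local file header signature
          * (PK\x03\x04) in the file, or -1 if none is found. */
         long find_zip_signature(FILE *f)
         {
             static const uint8_t sig[4] = { 0x50, 0x4B, 0x03, 0x04 };
             long offset = 0;
             int matched = 0, c;

             while ((c = fgetc(f)) != EOF) {
                 if ((uint8_t)c == sig[matched]) {
                     if (++matched == 4)
                         return offset - 3;  /* start of the signature */
                 } else {
                     matched = ((uint8_t)c == sig[0]) ? 1 : 0;
                 }
                 offset++;
             }
             return -1;
         }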
  7. Yesterday
  8. I have a built LabVIEW 14 project which I want to get back into a form that would allow me to "build" it again. My programming skills are high, but my LabVIEW skill is almost non-existent. I found an LLB file within the Windows resources of the PE executable. I noticed the LLB file contains one large "block" of data inside, called 'LVzp', which is encrypted. I wrote a proper xor-based decryption algorithm. This resulted in a ZIP file. I extracted the ZIP and found many folders and files inside. Some folders have names which indicate they might have been libraries in the original project, but were all extracted and put into a directory structure instead. For example, I see a folder "vi.lib", and inside there's a folder "dlg_ctls.llb". Now to my questions: How should I prepare all that for re-building? Should I re-create all the LLB files from the single VI files I see in the extracted folders? Should I also create LIB files before adding everything to a new project? I understand that "New -> Library" in the project view creates an LLB file; how do I create LIBs? Or maybe adding everything to a project as-is will work as well? Are any specific actions needed to re-create a project out of these files? I figured all the "Remove ..." options in the "Additional Exclusions" tab of the build target need to be unchecked; anything more?
  9. OK. But good or bad wasn't the question. I was after the definition of "accidental complexity", and what you've just said brings me back to what I said originally. Here I am saying that the underlying complexity of the framework is a necessary evil that has been "accepted and considered" rather than "accidental". What you seem to be confirming, from my interpretation of your suggestion, is that any hidden complexity is "accidental" in the context of the meaning, and therefore a framework is accidental complexity. Anyway. I've pretty much come to the conclusion that it's just more of a woolly buzz phrase like "Synergy" and "The Cloud". It obviously means different things to different people, and I have a sneaking suspicion that its meaning depends on where blame will be apportioned.
  10. OO is contradictory to functional programming as practiced by C#/Java/C++. Those languages insist on passing classes by pointer or reference (C++ can pass by value, but doesn't commonly use it). OO is compatible with functional programming when it behaves by value, as it does in LabVIEW. But many functional languages consider OO to be a half step toward more functional features: dynamic dispatching is subsumed by pattern matching, for example (see the sketch below).
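     A rough C analogy for that last point (C has no real pattern matching, so a tagged union plus a switch stands in for it): where OO would select the right area method through dynamic dispatch, the functional style inspects a tag and branches, which languages like Haskell or OCaml generalize into full pattern matching.

         #include <stdio.h>

         enum shape_tag { CIRCLE, SQUARE };

         struct shape {
             enum shape_tag tag;
             double size;          /* radius or side length */
         };

         double area(struct shape s)
         {
             switch (s.tag) {      /* the "match" on the variant */
             case CIRCLE: return 3.141592653589793 * s.size * s.size;
             case SQUARE: return s.size * s.size;
             }
             return 0.0;
         }

         int main(void)
         {
             struct shape c = { CIRCLE, 2.0 };
             printf("%f\n", area(c));  /* behavior chosen by the tag */
             return 0;
         }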
  11. No. The AF remains as it is in NXG. This would be something new.
  12. Exactly, but according to your definition it would be "accidental complexity". No, my original claim (the one Daklu reacted against) was that a good framework reduces complexity. Working without a tried and tested...something (it could just be a standard way of working that you are very expert in, but a "framework" adds a library that supports that standard) leads to code that has extra complexity in it. Often that extra complexity is not obvious, but that's the worst kind of complexity. A particular bugbear of mine is NI templates like the "QMH" and the "Continuous Measurement and Logging" template/example that come with LabVIEW, which you might think are simple. I consider them very over-complicated. I've given more than one talk pointing out weaknesses in CM&L.
  13. A lot of time has passed. Many new tools are available. Possibilities have evolved. What if I told you that it is still the same ZIP format? The LabVIEW Runtime must be extracting this somehow, right? So it shouldn't be hard to find, if you know the chunk ID is 'LVzp' (possibly backwards due to endianness; see the sketch below) and there's a zip extractor around (unzip from zlib, a version from circa 2012). See my GitHub projects for details.
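     On the endianness remark: a four-character chunk ID such as 'LVzp' stored as a 32-bit integer reads as 'pzVL' when loaded with the opposite byte order, so a scanner should check both. A small illustration:

         #include <stdint.h>
         #include <string.h>

         /* Pack four characters into a 32-bit FourCC, big-endian order. */
         #define FOURCC(a, b, c, d) \
             (((uint32_t)(a) << 24) | ((uint32_t)(b) << 16) | \
              ((uint32_t)(c) << 8) | (uint32_t)(d))

         /* Do the next four bytes spell 'LVzp' in either byte order? */
         int is_lvzp_chunk(const uint8_t *p)
         {
             uint32_t id;
             memcpy(&id, p, 4);                       /* native-endian load */
             return id == FOURCC('L', 'V', 'z', 'p')  /* big-endian host    */
                 || id == FOURCC('p', 'z', 'V', 'L'); /* little-endian host */
         }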
  14. Exactly, but according to your definition it would be "accidental complexity". This is why I said in an earlier post that people confuse architecture and frameworks. I personally use a SOA architecture, but within each service I may use QMH or Actors or whatever framework best achieves the goal. Many people choose one framework and fit everything inside it, making it their architecture. And let's be frank (or brian): most LV frameworks are written and intended to be just that. So LVOOP is "accidental complexity"? (Just teasing.) I don't really think it is a thing by these definitions when talking about complexity. Rube Goldberg code exists, but it isn't really "accidental". It is the product of brute forcing a linear thought process rather than iteratively designing one. Neither case is "accidental". Bolt-on bug fixes that cure the symptom rather than the cause might be argued to be "accidental complexity", but that is just bad practice (of which I'm sure we are all guilty at some point). From the feedback, it seems more of a weasel phrase for inelegant/inefficient code (except AQ's take on it), used in order to not admit it as such. I suspect this phrase is only used when things go wrong on a project, and it probably has an unquantifiable quality about it.
  15. 1. What application architecture do you use for higher-level organisation? I'm going to be pedantic and change my answer, since the options listed are frameworks, not architectures. 😋 The architecture I most frequently use is the "hierarchical actor architecture." If one were to ask for a little more detail, I'd say "hierarchical actors with asynchronous name-data messages."
  16. What is a "framework" but a tried and tested template for design combined with a support library and hopefully some productivity tools and documentation?
  17. The things I've read always refer to it as "accidental" complexity, but you can think of it as "unnecessary", "incidental", or simply "non-required" complexity. It's just complexity in a system that isn't necessary to meet the requirements (both functional and non-functional) of the system. System complexity is a subjective evaluation, so these are more abstract ideas than concrete rules to follow.
  18. Nah. Don't buy it. This is a change in requirements, and there is no added complexity in the space itself. This is still a change in requirements, and this is definitely an excuse for claiming value-added when no intention to add exists! Just because a user infers a feature that was never offered, it doesn't mean that the code is more complex. It just means the user (or Product Owner) has identified a new requirement. We were talking about code complexity growth, and the term "accidental complexity" implies some sort of hidden cost, unknown or impossible to know at design time (from what I can tell). This is why I asked for clarification. I've never heard of it, and it just sounds like an excuse. By that definition, wouldn't the framework itself be an "accidental complexity" rather than the "considered and acceptable" complexity of a tried and tested template for design? Maybe I'm just getting too hung up on "accidental" and what it implies.
  19. Allow me to introduce you to implied spaces. When I build a two-story house, I consciously add a staircase between the zeroth and first floors. I add handrails for safety, optimize the height of the steps for average human legs, etc. I spend a lot of time designing the staircase. What I don't spend a lot of time designing is the room under the stairs. I put a door on it and turn it into a closet, a storage place for the people who live in the house.

     Now, the people who live in the house start using that storage space -- exactly as intended. But after a while, they are complaining that frequently, they need something at the back of the storage space, so they have to take everything out to get it and then put everything back in. You ask me, "Didn't you put other closets in the house?! Why aren't they storing more things in the other closets?" I did add other closets: I wasn't that short-sighted. But it turns out that this staircase closet is taller than any of the others, so it holds things nothing else holds... that wasn't intended, it just happens to work because it is under a two-story staircase. Also, this closet is central in the house, so it is closer than the other closets, so the users think that the time needed to pull everything out to get to something at the back isn't *so* bad. The users of the space made it work, but there is accidental complexity in how they have evolved to use it. I didn't do anything wrong in the design, and they didn't do anything wrong in giving me their requirements. It just happened, with no one at fault.

     With this new understanding of my users, I refactor the house and add a second door on the short end of the stairs so people can pull from either end. Suddenly the under-the-stairs closet is not an implied space but an intended space.

     It doesn't matter how much you refine a design; there are always places that are implied within the design that are not spec'd out. It's a macroscopic aspect of Gödel's Incompleteness Theorem. Some things aren't designed; they just work the way they work because they're near the things that are designed. And when users start relying upon that implied functionality, that is accidental complexity.

     Inspiration for this post came from Implied Spaces by Walter Jon Williams and Whit by Iain Banks, two science fiction novels that happened to give me good advice on software design. Accidentally... I think.
  20. I'm not sure how to answer this question. To me, it's equivalent to asking whether LabVIEW will let you add your own definition of VIs. NXG has the ability for you to create your own models of computation. To make your question more confusing, these actors aren't AF actors... the AF is a library where actors are constructed out of bits and pieces of language that we have available. This would be its own thing. Regardless, it is years away, and I don't want to hijack Powell's discussion of things that exist today.
  21. Hi Rolf. That's good feedback. The current UI/UX is just a proof of concept; once we start designing the actual UI/UX, we hope it'll be efficient to use and pleasant to look at. One other note: the indexer is open source. Please feel free to suggest adjustments by joining the dev team on GCentral's git repo. Keep in mind GCentral.org is in its infancy, so we're choosing to sacrifice aesthetics for functionality to avoid bikeshedding. Glad to hear you think GCentral is a good initiative, and thanks for your thoughts!
  22. How do I plot a complex function as a 3D surface in LabVIEW, e.g. f(z) = log(z)? In general, is LabVIEW able to plot complex surfaces (where there is an imaginary part)? If it can, which of the functions presented below would be best to use? Help!
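     Whatever plot indicator ends up being used, the data preparation is the same: evaluate log(z) over a grid of complex points and pick a real-valued quantity (real part, imaginary part, or magnitude) as the surface height. A sketch of that grid evaluation in C (the grid bounds and resolution are arbitrary choices for the demo):

         #include <complex.h>
         #include <stdio.h>

         #define N 5   /* grid resolution per axis, small for demo output */

         int main(void)
         {
             /* Sample z = x + iy over [-2,2] x [-2,2]; use the real part
              * of log(z), i.e. log|z|, as the height. Note z = 0 is a
              * pole of log, so that grid point evaluates to -inf. */
             for (int i = 0; i < N; i++) {
                 for (int j = 0; j < N; j++) {
                     double x = -2.0 + 4.0 * i / (N - 1);
                     double y = -2.0 + 4.0 * j / (N - 1);
                     double complex z = x + y * I;
                     printf("%8.3f ", creal(clog(z)));
                 }
                 printf("\n");
             }
             return 0;
         }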
  23. I don't think Functional and OO are contradictory, per se. Rather, it's by-reference objects contained in other objects that are very non-Functional. By-value objects and LabVIEW dataflow seem quite Functional to me.
  24. I think Daklu meant when your program is more complex than the problem it is solving. My problem with calling this "accidental" is that it sounds easy to avoid if you're careful, when really lots of extra complexity is very hard to avoid. An example that comes to mind is shutdown/cleanup code. A good framework will handle cleanup simply and easily (this is an area where I think Messenger Library is strong).
  25. What is "accidental complexity"? This sounds like an excuse given to management.
  26. Last week
  27. Yes, I try to do that with "Messenger Library". Unfortunately, I find that if I stress that it is a library of messaging (with optional "actor" templates) that it gets dismissed as not to be considered when comparing "frameworks", but if I present the whole thing as a "framework", then people assume it is very restrictive.
  28. That "accidental complexity" is a killer. It's a big part of that exponential increase in complexity you showed on your graph a couple of posts ago. Helping to get that out of the way so you can address the true complexity of the actual problem is no small thing.
  29. At this point, if you only use NI hardware you are fairly safe: it's either supported with 64-bit drivers or discontinued anyway. If you use other 3rd-party drivers, the situation is a lot more checkered. Some vendors have already abandoned 32-bit software and only deliver 64-bit now. Others have not made the step, and many might never, as their hardware and software offering is in a sort of maintenance state. "Look, it works!" "Hands off and don't touch it anymore! It was hard enough to get it not to crash constantly!" 😆