
Rolf Kalbermatter

Members
  • Posts

    3,837
  • Joined

  • Last visited

  • Days Won

    259

Everything posted by Rolf Kalbermatter

  1. In the scenario of this thread it's not an option. But for a C DLL to be called by a LabVIEW program, it is useful for anyone who gets to use my DLL in LabVIEW, ideally with an accompanying LabVIEW VI library! Also, while the pointer variant is indeed a possible option, in LabVIEW it is in fact seldom significantly faster and quite often slower. If there is any chance for the caller to know the size of the buffer beforehand (perhaps by calling a second API, or because the nature of the call defines what data needs to be returned: data acquisition, for instance), the use of caller-allocated buffers passed as C pointers into the function is at least as fast or faster, since the DLL can copy the data directly into the buffer. With a DLL-allocated buffer you end up in most cases with a double data copy: once in the DLL when it allocates the buffer and copies its data into it, and once with the MoveBlock() in the caller. So claiming that it is always faster is not correct. At least inside LabVIEW it is usually about the same speed, only with the data copy happening in one case inside the DLL and in the other in the caller. Only when the DLL can determine the buffer size solely during the actual retrieval itself can it be an advantage to use DLL-allocated buffers, as that avoids having to allocate a potentially hugely over-sized buffer. If the potential user of the DLL is a C program, then this is different. In that case returning DLL-allocated buffers is indeed usually faster, as you do not need the extra MoveBlock()/memcpy() call afterwards. But it is in any case a disadvantage that the API gets complicated to a level that stretches the knowledge limits of many potential DLL users, and not just LabVIEW Call Library Node users, as it is non-standard and also makes it easy to introduce bugs with respect to resource management, because it is unclear who eventually needs to deallocate the buffers.
The returned pointer could also be a statically allocated buffer inside the DLL (often the case for strings) that would be fatal to try to free(). And another issue is that your library absolutely needs to provide a corresponding dispose() method, as the free() function the caller might be linking to might operate on a different heap than the malloc() function the DLL used. The only real speed improvement comes when the data-producing entity can directly create the managed buffers the final caller will eventually use. But C pointers don't count as such in LabVIEW, since you have to do the MoveBlock() trick eventually. One more comment with respect to the calling convention: if you ever intend to create the shared library for non-Windows platforms too, C calling convention is really the only option. If you start out with stdcall now and eventually decide to create a Linux or MacOS version of your shared library, you would have to either distribute different VI libraries for Windows and non-Windows platforms, or bite the bullet and change the entire VI library to C calling convention for all users, likely introducing lots of crash problems for users who find it normal to grab a DLL copy from somewhere and copy it into the system or application directory to "fix" all kinds of real and imagined problems. At least there are tons of questionable links among the top Google hits about DLL downloads to fix and improve the system whenever I google for a particular DLL name. That, and so-called system performance scanners that offer to scan my system for DLL problems! I never tried them, but I would suspect 99% of them do nothing really useful, either containing viruses and trojans or trying to scare the user into downloading the "improved" program that can also fix the many "found" issues, of course for a fee in hard currency.
  2. Actually, using fully managed mode is even faster, as it will often avoid the additional memory copy involved with the MoveBlock() call. But at least in C and C++ it's only an option if you can control both the caller and callee, or, in the case of a LabVIEW caller, know exactly how to use the LabVIEW memory manager functions in the shared library.
  3. Unless your string can have embedded NULL bytes that should not terminate it, there should be no need to pass string parameters as byte arrays. In fact, when you configure a CLN parameter to be a C string pointer, LabVIEW will on return explicitly scan the string for a NULL byte (unless you configured it to be constant) and terminate it there. This is usually highly desirable for true strings. If the buffer is binary data that can contain 0 bytes, however, you should indeed pass it as a byte array pointer to avoid having LabVIEW scan it on return for a NULL character.
  4. No, you can't avoid having the caller preallocate the strings and arrays if you want everything to work the way you imagine. There is no way the LabVIEW DLL can allocate C string or array pointers and return them to the caller without limiting the caller to only use very specific deallocator functions provided by your library too, or to never link with a different C runtime library than the one you used (that doesn't just mean a specific type of compiler but even a specific C runtime version, down to the last version digit when using side-by-side (SxS) libraries, which all Visual C versions since 2005 do). This is the old problem of managed code versus unmanaged code. C is normally completely unmanaged! There exists no universally accepted convention for C that would allow allocating memory in one place and deallocating it in another without knowing exactly how it was allocated. This requires full control of the place where it gets allocated as well as where it gets deallocated, and having both in the caller is seldom the case and usually also defeats the idea of libraries almost completely. The only way to not have the caller preallocate the arrays (and strings) is to have a very strict contract (basically this is one main aspect of what managed code means) between caller and callee about how memory is allocated and deallocated. This happens for instance in DLLs that are specifically written to handle LabVIEW native datatypes, so LabVIEW as a caller does not have to preallocate buffers to unknown sizes and the DLL can then allocate and/or resize them as needed and pass them back to LabVIEW. In this case the contract is that any variable-sized buffer is allocated and deallocated exclusively by LabVIEW memory manager functions. This works as long as you make sure there is only one LabVIEW kernel mapped into the process that does this.
I'm not entirely sure how they solved that, but there must be a lot of trickery when loading a LabVIEW DLL created in one version of LabVIEW into a different version of LabVIEW to make sure buffers are allocated by the same memory manager when using native datatypes. But forcing your C users to use LabVIEW manager functions so you can pass LabVIEW native datatypes as parameters is not an option either, since there is no officially sanctioned way to call into the runtime system used by the LabVIEW DLL from a non-LabVIEW process. Also, your C programmers would likely spew poison and worse if you told them they have to call such and such a function in exactly such a way to prepare and later deallocate the buffers needed, using some (to them) obscure memory manager API. This is not even so much bad intention by NI and the LabVIEW developers but simply how programming works. The only universally safe way of calling functions with buffers is to both allocate and deallocate them in the caller. Anything else requires a very strict regime about which memory manager calls to use. That can work if it is designed into the programming framework from scratch (C#, for instance), but C and C++ existed long before there was any programming framework that cared about such things. Many programmers have attempted to add something like that to C and C++ later, but each came up with a different interface, and each of them will always remain an isolated solution not accepted by the huge majority of other C and C++ users. Basically, if you want to go down the path you described, you will have to bite the bullet and use C pointers for arrays and strings, and require the caller to preallocate those buffers properly.
  5. You would like to receive absolution to use the XNodes, despite all the well-known comments out there. The only people who could really grant it most likely won't, as they are not allowed to, and the rest of us can't, beyond relating the experiences of the few who have tried. Working on them seems a rather crash-intense affair; using them seems a bit safer, but the mileage may vary greatly depending on LabVIEW version, OS and whatever else, including the position of the moon. What I can safely say is that there is absolutely no guarantee about how XNodes will behave in future versions of LabVIEW. They may be improved, left to code rot that causes more crashes in newer versions, or eventually discontinued entirely and removed from future LabVIEW releases. As such I would consider it a totally irresponsible decision to use them for anything but private experiments.
  6. So I've been fighting a bit over the weekend with this and came across a multitude of issues. The first one is that most ZIP utilities, at least on Windows, seem to use the OEM codepage to store ASCII information in the ZIP archive, whereas LabVIEW as a true GUI application of course uses the default ANSI codepage. Both are set depending on the language setting in the International Settings control panel, but they are usually totally different codepages, with similar character glyphs but typically at entirely different code positions. In addition, ZIP files have a feature to store the file name as a UTF-8 string in the archive directory. So far so good. Implementing the correct translation from the LabVIEW ANSI codepage to the OEM codepage and back is fairly trivial on Windows, a bit more complicated on MacOSX and only with limited accuracy, since the Mac traditionally uses somewhat different character translation tables than Windows. On Linux it is a complete impossibility without linking to external libraries like iconv, which might or might not be available on particular Linux distributions! So I'm a bit in limbo here about how to go about this, because adding an entire codepage translator to LVZIP for non-Windows targets seems like rather bad overkill. While investigating this I also found another issue entirely independent of LVZIP. Suppose you have a file on your disk with a filename that contains characters not present in the current ANSI codepage of your Windows system! There seems to be absolutely no way to access this file from within LabVIEW, since the LabVIEW path internally uses multibyte characters based on the current ANSI codepage, and if a filename contains characters not present in that codepage, the LabVIEW path will not be able to represent the filename at all.
In case you wonder why such filenames can even exist: unless you use the old FAT file format on your Windows system, filenames are really stored as UTF-16 in the filesystem, and Windows Explorer is fully Unicode compliant, so those files can happily exist on the disk and get displayed by Explorer, but not be accessed by LabVIEW. And in case you wonder whether this is an issue on non-Windows systems: on Linux definitely not nowadays, since all modern Linux systems use UTF-8 as encoding, and LabVIEW seems to use whatever the default multibyte encoding on the OS is, which would be UTF-8 in those cases. For MacOSX I'm not entirely sure, since there are about umpteen different possible APIs to access the filesystem, depending on whether you go Carbon, Cocoa, Posix or any mix of them, and each has its own particular limits and specialties. I really wish they had made the Path format use UTF-16 internally on Windows long ago and avoided such problems altogether, possibly translating the path to a multibyte encoding when needing to flatten a path in order to keep the flattened format consistent. But at least all existing filepaths on the disk would then be valid within an application. As it is now, the flattened path isn't really standardized in any way anyhow, as it is flattened to whatever local multibyte setting the OS is configured for; on Windows that's one of the local codepages, while on Linux and possibly Mac that's UTF-8 nowadays. So passing a Path through VI Server between different LabVIEW installations will already run into problems between different platforms, and even between Windows versions using different country locales. Making it all consistently UTF-8 in flattened format would not really make this worse but rather improve the situation, with one single drawback: flattened paths on Windows systems stored by older versions of LabVIEW would not automatically be compatible with LabVIEW versions using UTF-8 for flattened paths.
Basically I would like to know two things here: 1) What is the feeling about support for translation of the filename strings on non-Windows systems? Is that important, and how much effort is it worth? Consider that support for such translation on embedded targets like VxWorks would only be possible with the addition of a codepage translator to LVZIP. 2) Has anyone run into trying to access filenames containing characters that the current Windows multibyte table does not support, and if so, what solution did you choose?
  7. Primitives are not stored as an entity on disk, but are directly created by code residing in LabVIEW.exe. The LabVIEW menu palettes then hold links to those primitives. Creating primitives not made available through the menu palettes is a function of the Create Object primitive that is part of the scripting palette extension. This node has a ring input that lists all the primitives and controls the LabVIEW executable contains internally.
  8. I don't really have an idea how to do it better in LabVIEW, but this use case is specifically what generics are for in Java and .Net, and I suppose templates in C++, although the template mechanism seems so involved to me that I never tried to understand it nor have used it. The one limitation, at least in Java, is that generics only work with object datatypes, not with the primitive datatypes. That is sometimes rather inconvenient, but Java also has object types for its primitive datatypes so it's possible to get around that; yet using object types everywhere for primitive datatypes, including in arrays and such, can have a significant memory impact.
  9. The formula node contains a simple text-style parser with a subset of the C syntax. The resulting code is compiled into the VI, but it has no optimizations at all. After all, the LabVIEW developers did not intend to include the entire LabWindows/CVI compiler (most likely they didn't even borrow code from the CVI compiler for this). NI never said the formula node would be highly optimized, but always maintained that it is for the text-inclined to be able to define mathematical calculations without the need to translate everything into LabVIEW nodes; the performance of the calculation will likely always be somewhat slower than the equivalent code done with LabVIEW nodes. LabVIEW has definitely not optimized across any structure boundaries in the past, and even less across VI boundaries. While LabVIEW recently seems to have started to also do some optimizations across structure boundaries, I would bet that the formula node is not a candidate for this until they change it to compile to the new intermediate DFIR graph that then gets fed into the LLVM compiler for actual compilation to target code. And while such a change would be technically very nice, it is probably low on the to-do list, because it is a considerable task but gives little bang for the buck in terms of marketing it as a feature. Another tangent to this: if someone insists on doing complicated formulas in text and is concerned about squeezing the last grain of performance out of them, then they are very likely to go to an external library anyhow, since that gives them every choice of language as well as compiler toolchain, possibly with a highly optimized code generator and/or runtime libraries.
  10. I'm afraid such a list will remain a pipe dream. Officially those private properties and methods don't exist, and NI people are not supposed to comment on them, but they are the only ones who could really make some educated comments. For me it is just an educated guess. While I think the safety of these two properties is fairly good, as far as rusty nails and other painful accidents are concerned, they were probably made secret because the LabVIEW developers did not want to carve the association between a panel and an OS window in stone. Also, as we have seen, the move to 64-bit has posed a challenge. Changing the existing property to a different data size is not really an option, as that could lead to very hard-to-debug bugs when the truncated value is passed around as a 32-bit entity and eventually interpreted as a handle again, which might accidentally point to an entirely different but still valid object. So in a way, by not having exposed that property, they could easily change it without having to go through 200 documentation change requests at the same time.
  11. Our friend flran posted them somewhere else on this board. Aristos himself had revealed them at some point by posting a password-protected file on the NI site. Of course it didn't take long until someone peeked past the password. But this implementation is definitely to be considered part of the unfinished attic in LabVIEW, with many rusty nails sticking out everywhere and possibly causing you nasty pains. The IDE doesn't necessarily know. In fact the older Visual Studio IDEs did nothing like that; the only thing they knew about was syntax highlighting. But many IDEs (Eclipse too) nowadays have special syntax check modules that basically contain the entire syntax parser of the compiler in order to provide such just-in-time error indications. It's not that the IDE is doing something trivial; it's that it pulls in the entire compiler parser to do this. A void wire logically doesn't yet exist in LabVIEW, although the LabVIEW internal typecodes do know a void datatype, which is already used for various internal things. It is not an unknown type but a type carrying no data at all.
  12. Hmm, for some reason it would seem rather strange to support the retrieval of an HWND across machines. The HWND really only has any meaning on the system it was created on. And if you use the OS window, you are setting yourself up for big trouble once you move to LabVIEW for Windows 64-bit. I suppose you are doing some remote work where you retrieve the handle and pass it back to another function or whatever on the same remote system. The proper way to do such things would be to have all the actual handle twiddling done on the target machine, and expose the VI doing this over the VI Server to your other machine(s).
  13. The issue is rather complicated. I can fairly easily add support for filenames in whatever codepage your Windows system currently uses as default OEM codepage (which is how ZIP file names are supposed to be stored, while LabVIEW itself uses the ACP), but there is no simple way to support arbitrarily named files not currently displayable in that codepage. Those files can be correctly seen on modern Windows systems with an NTFS filesystem, since the filenames get stored as UTF-16 there, but LabVIEW's file functions are still 8-bit codepage based. If you try to open a file in LabVIEW containing characters not currently displayable in the current system codepage, LabVIEW fails fatally, since it cannot reference such a file at all. So in order to allow LVZIP to compress a directory containing such files into a ZIP file and vice versa, the entire directory enumeration and such would need to be done outside of LabVIEW in the C code, in order to allow using the UTF filename feature in ZIP files. But adding an entire ZIP/UNZIP utility to the C code of LVZIP seems a bit like overkill to me. So the question is whether it is enough to support foreign characters for the system the file was created with, plus an optional setting to force Unicode filenames in the archive. But if you try to archive or unarchive files with characters in the name that can't be displayed by the current Windows codepage, then LabVIEW itself would catastrophically fail when I pass those names to the LabVIEW file I/O functions. Also note that the same probably applies to the Mac too, and for Linux I don't even have an idea yet how to solve this. For the cRIO and Pharlap systems it is most likely not even an option. It's too bad that the LabVIEW developers didn't change the internal file I/O API to use Unicode I/O functions and extend the Path variable to support Unicode internally.
Being a private datatype anyway, there would be very few issues with backwards compatibility, since whoever has relied on internal details of the Path data structure has been out on a limb already.
  14. I'm not aware of any conditions that would disallow transferring the ownership of a LabVIEW license to someone else. And in most places such a provision in the license agreement would be null and void. However, you need to make sure that you do get the ownership. Simply buying a CD-ROM and a license serial number from anywhere might not really give you any rights. I would insist on a document that names the serial number, states the previous owner, and grants an irrevocable transfer of ownership of that license and serial number to you. Otherwise you may try to register the software at NI and then get told that you are not the rightful owner, and when disputing that, the previous owner may suddenly claim to still own the license. I do know for sure that NI actually does care about ownership and has in the past contacted us about licenses that we had originally purchased and sold as part of a whole project, when the actual end user registered the serial number as their own, since the registering entity did not match the purchasing entity.
  15. Indeed, the zlib library, or more precisely the zip addition to it, does not use MBCS functions. And that has a good reason, as the maintainers of that library want to keep the library compilable on as many systems as possible, including embedded targets. Those often don't have MBCS support at all. However, I'll have a look at it, since the main part of the name generation is actually done on the LabVIEW diagram, and that may in fact be more relevant here than anything inside zlib. There might be a relatively small fix to the LabVIEW diagram itself or the thin zlib wrapper that could allow MBCS names in the zip archive.
  16. Ton has definitely pointed out one potential pitfall in your code. The Panel:Close? event always works for me. If you tell LabVIEW to discard it, you have to make sure that your state machine then goes into a clean-up state that eventually closes the front panel explicitly and then also terminates the state machine loop. If your front panel is the top-level VI too, the code after the VI:Close method won't be executed in an executable, as the LabVIEW runtime engine will be shut down immediately after the last window is gone, but it helps when the application is run in the development environment. I have personally only used the Application:Close? event in daemon-like applications that do not show a front panel by default. The proper operation when a user requests to shut down the machine is that Windows sends a WM_CLOSE message to every application window, which will result in a Panel:Close? event in the LabVIEW VI, and then a WM_QUIT to the main instance, which will trigger the Application:Close? event. But if you handle all the Panel:Close? events properly, there should be no panel (hidden or not) left over by the time the Application:Close? event triggers. On the other hand, adding both Panel:Close? and Application:Close? to the state machine handling and going into a proper VI terminate state when you choose to discard that event wouldn't hurt either.
  17. To be honest, I'm not sure about my position on Bitcoin itself. I tried to understand what it was about by visiting the forum there, but didn't really get a clear picture. From some of the remarks there, it seems to be used by some folks in somewhat questionable ways. But then anything that represents some value in some form, even virtual currency or items in online games, quickly attracts some folks who have more than questionable intent. So that alone is certainly not a criterion for whether the idea of Bitcoin can be considered legitimate. However, it seems to me some form of online virtual currency that only exists by the grace of people believing that it represents some value. That in itself is an interesting concept, and in fact not so different from even the official currencies we pay with all the time, and probably even more real than derivatives on stock exchange markets. However, who controls the creation of that value? In other words, how are Bitcoins created? And I'm not so much interested in the technical details here, which are already mentioned in this thread with the formula, as in the process and procedure that controls the creation. If it could be created by anyone in any number, it would obviously lose all value.
  18. With this reasoning any law enforcement attempt would be useless. But as it seems, law enforcement is usually quite effective without the need to invoke the death penalty, and to my knowledge even in the US there are states where the death penalty is not even an option as a means of enforcing the law. And in this particular case, stripping the offender of the monetary gain plus some additional fine is usually the modus operandi, with possibly jail for repeat offenders. The death penalty for copyright violation would seem really archaic to me, and at least here it's definitely not an option. As for the objection that reasoning about stealing money or the work of some writer has an emotional component: claiming that stealing software or a book by copying it is legally and/or morally right would seem emotional to me too.
  19. You realize that the emotional aspect of this thread really only started when you brought in murder!
  20. Well, copyright was invented in the first place to protect creations, such as writings or even paintings. Maybe some want to claim that anybody can copy a book, for instance, and distribute it as his own, but I doubt anyone would really want to make that claim seriously. Unless of course you think writing a book is trivial and cannot be viewed as an ability, and therefore an author has no right to gain anything from trying to sell it. But whoever believes that should please start writing their own books first and make them available for free before even trying to make that claim. The application of copyright to software is in many ways flawed, but it is the best we have generally come up with so far. It's definitely something not everyone can do, especially doing it well, so it would seem an ability that deserves some protection. Of course it would be nice if that were not necessary, because nobody needs to earn any money anymore as everything in this world is free and available to whoever needs it, but that is not how this world works, as we all know. So why would it be OK to copy software for free and deny the creator a decent income, but not to steal money from the rich, as they have more than enough of it? Does someone really need billions of dollars? Do you really want to limit software creation to "free as in beer"? Or do you say that the decision to pay for software should be voluntary, no matter whether you use it to make a profit yourself or just for some leisure time? I think whoever makes such claims should first have a proven track record of providing their own creations under such conditions before he or she has any right to speak.
  21. Why do you ask? The right way for a real programmer would be to start working on it, not to ask if someone will do it!
  22. I'm not sure, but I don't think there is a good way. It may be possible in theory to use the Picture Control to display a PICC-formatted data stream, and with lots of wizardry I'm sure it would be possible to store the PICC as a resource inside the VI itself and reference it from there, but that would require so many hacks in the VI data itself that I would consider it a major work even for someone who has access to the complete VI data format specification. The OS clipboard likely won't work, since it does not know about this type of format. Yes, you can copy PICCs between different LabVIEW objects over the clipboard, and that is an interesting way to customize controls, but this is because LabVIEW internally maintains its own clipboard, which knows about all kinds of internal formats including LabVIEW diagrams, paths, controls, and also PICCs, etc. It does not transport those formats to the external clipboard and back, but instead translates them to a format that the external clipboard officially supports, such as a bitmap when you copy a diagram and then try to paste it into another application, including a different LabVIEW process. While on Windows the supported external vector image formats are WMF/EMF, on the Mac it used to be PICT, which actually worked quite a bit better than WMF/EMF back then. Why not still use the PICC format? The code is there already, the internal control editor too (even though it badly needs some love). Adding another format like SVG may seem like a nice choice, but LabVIEW does not know about it so far, so it would have to be added, and it would not just be an importer/exporter: just about everything where PICC resources are used would have to be touched seriously, so basically anything in the front panel. Maybe if they decide to overhaul the control editor and make it a fully featured editor again, they would consider adding a new intermediate format like SVG, but not vice versa.
I also believe that much of the Silver controls was simply done by using the control editor to create new custom controls, but I haven't looked at them yet. I'm still using classic controls everywhere for internal front panels, and the system style almost exclusively for user interfaces. I feel the ever-changing cry for newer, fancier, simpler, crazier, whatever controls is mostly hysteria. Whatever MS declares the newest hit, people seem to want to have, even if it hurts the eyes.
  23. Where would you get such a PICC from? This format was created by the LabVIEW developers for LabVIEW, and outside of the LabVIEW development group no editors or similar tools exist that can create this format. I used to use WMF images in the past. But most WMF editors out there were abominable, to say the least, and created images that produced rather different results depending on the Windows version and viewer used. EMF is supposedly a little better, but it is also a somewhat niche format that most Microsoft applications support but don't advertise, and only very few other applications claim to support, with varying results. I had some interesting results using PowerPoint to create the graphics. However, even though LabVIEW has some support for importing WMF/EMF from the clipboard, it can be tricky to convince Windows/LabVIEW to take this format rather than have Windows advertise an intermediate bitmap version of the WMF/EMF file, which will cause LabVIEW to use that instead. However, I cannot comment on how this works in recent versions, as my experiments with this date from LabVIEW 3 and 4 times. I'm pretty sure that this functionality has not received any facelift since at least LabVIEW 5, because it is in fact a feature almost never used by anyone, and NI doesn't spend resources on unused features. It could just as well have bit-rotted over the years and not work anymore, or even have been removed entirely.
  24. As pointed out by others already, your car analogy has many flaws. For one, you can't copyright a colour, and even less the right to change the colour of something. Copyright is about the right to copy something considered a creation in some form or other. Except for the design style, logo and such things, there is little that could be copyrighted about a car. Patents are an entirely different thing again, but even they are about copying something considered technical art in some form. Neither applies to whether you can paint your car or not. A manufacturer could declare your warranty void, or you could lose the right to make liability claims against the manufacturer, if he can make it reasonably believable that your paint job caused the defect you seek warranty for, or the accidents you claim liabilities for, but nothing else. Software is an entirely different beast. Most licenses, even open source, don't grant you ownership of the software, only a right to use it. That could be considered absurd, but then what ownership would you get? Can someone own bits and bytes, or the letters in a book? Do you own the letters in a book, or rather just the paper the letters are printed on? Are you allowed to destroy the book? Yes, of course, and likewise you are allowed to destroy the copy of the software on your hard disk, or even the hard disk itself, as that is a material good you own. But you do not own the particular assembly of bits and bytes that makes up the software. For one thing, ownership usually means exclusivity, but how can you own those bits and bytes exclusively? All material things can only be owned once at any particular time. A particular piece of software exists an unlimited number of times, so the classical property rules are very difficult to apply here. That is why all legal systems have adopted copyright rather than property rights for distributed software, since many provisions of property rights simply do not apply to software programs at all.
Whether that is a good choice I'm not sure, but adopting property rights instead would certainly not solve those problems but simply create others, and possibly a lot more. As you get older you might find out that everything tends to have quantum-mechanical inner workings, especially things that appear logically bipolar in nature. What is good and bad may seem very clear to simple minds, but if you go deeper you will find that the decision about that is very relative and really only made by the observer, but definitely not by a universal police force called God.