Everything posted by Aristos Queue

  1. QUOTE(chrisdavis @ Aug 29 2007, 08:46 AM) Query: Has NI Week ever had a presentation on using source code control with LabVIEW? It seems to come up as a topic of conversation regularly, and perhaps NI should have a presentation in the library for sales folks to be able to present on this topic. Would that be a good topic to suggest for next year?
  2. QUOTE(LV Punk @ Aug 29 2007, 06:42 AM) Good question! The answer is "no." I hadn't even paid attention to the second flat sequence... that's just my normal way of ignoring an error that I don't want to chain any further or report. I could've just left the output unwired, but then the automatic error handling would've kicked in (yes, I could turn off automatic error handling on this VI, but it is my personal preference to leave that feature on... I try to either handle the error or consciously ignore it, as in this case. If I do neither, then the auto error handling flags my mistake.) Notice that the To More Specific node could fail if the second class is a Leaf Node class. If that returns an error, then there's no point in doing the insert into the tree (we'd be inserting an empty node), so I use the error out to skip the execution of that next node. But beyond that point, there's no need to report an error. The Insert function itself cannot return an error of its own in this case, so we're not ignoring anything other than the type-casting error. Let me repeat: the first sequence structure has nothing to do with memory allocation or deallocation. It has to do with whether or not the Bundle node executes. If the output of the Bundle node is unwired, then the node will become a no-op, which in this case would be a problem. QUOTE(Tomi) Well, the garbage collection algorithm is not documented. There is no garbage collection, so there's no algorithm to document. The data is simply left alone in the wire, and the next value to come down that wire will overwrite it. For details, take a look at the online help for the Request Deallocation primitive.
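For anyone who thinks better in a text language, here is a rough C++ analogy of that error-gated execution. The type names are invented for illustration; the real code is G, and the cast in question is LabVIEW's To More Specific Class node rather than dynamic_cast:

    #include <memory>

    // Invented stand-ins for the Map classes under discussion.
    struct MapNode { virtual ~MapNode() = default; };
    struct BranchNode : MapNode { /* key, value, subtrees elided */ };

    void insertIntoTree(std::shared_ptr<BranchNode> /*node*/) { /* ... */ }

    // The cast can fail (here it yields null, just as To More Specific
    // returns an error when handed a Leaf Node). On failure we skip the
    // insert and consciously swallow the failure rather than report it.
    void reinsert(std::shared_ptr<MapNode> node) {
        auto branch = std::dynamic_pointer_cast<BranchNode>(node);
        if (!branch) return;      // cast failed: nothing worth inserting
        insertIntoTree(branch);   // per the post, insert itself cannot fail here
    }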
  3. QUOTE(Tomi Maila @ Aug 29 2007, 05:03 AM) This has nothing to do with the flat sequence. We just need a wire coming out of the Bundle node going *somewhere*. The easiest "somewhere" to drop is an empty flat sequence.
  4. QUOTE(NormKirchner @ Aug 28 2007, 08:54 PM) And since we all saw what happened the last time Norm got excited, I'm going to go ahead and post this now before it gets worse. ;-) Let's start with something easy... the case when we are setting a key-data pair into the map and we discover that the value is already in the map. Take a look at the block diagram of Map.lvlib:Branch Node.lvclass:Node Insert Pair.vi http://forums.lavag.org/index.php?act=attach&type=post&id=6788 Here we are taking the existing data and swapping it with the new value. What does the Swap primitive buy us? Why didn't I just cross those wires? In this case, I could have, but by using the Swap prim, I'm avoiding two copies of data -- one copy out of the Map structure and one copy into the Map structure. No copying needed at all. We're playing with pointers in memory here, gentlemen and ladies. Directly touching memory in a way that LabVIEW has never done before. I point out this simple case since it'll help you understand the far trickier case of deleting a node from the tree... For those of you without LV8.5, here is the block diagram for Map.lvlib:Branch.lvclass:Node Delete Key.vi which caused such consternation in previous posts... http://forums.lavag.org/index.php?act=attach&type=post&id=6789 What in Hades is going on here? We are implementing a binary tree. That means that when we delete a node, we are left with a left subtree and a right subtree. One of those gets put into the tree to take the position of the deleted node. The other must be inserted into the tree as if it were a new node. We want to do the first of those WITHOUT MAKING A COPY OF ANYTHING. Why? Because a single node really is an entire subtree. We want to simply connect that subtree into a new place in the graph without duplicating all the pointers. Otherwise all of our efficiency is lost. The node that we're going to delete has the left and right subtrees as member data values. When the deleted node reaches the end of its wire, it will disappear from memory AND WILL TAKE ALL OF ITS MEMBER DATA WITH IT. So the first thing to do is to give the node about to be deleted some new values for its left and right nodes. We use the Leaf Node constants. The Force Copy primitives (those small dots, for those of you without LV8.5) keep LabVIEW from trying to be too smart about optimizing copies. They guarantee that we have an independent value of a Leaf Node, one for left and one for right. We Swap those two in place of the current subtrees of the node being deleted. So our left and right subtrees are now disconnected from the main tree. (A rough C++ analogy of this severing step appears at the end of this post.) Why is the Sequence Structure there? Because if we don't wire the output of the Bundle node, LabVIEW will look at the diagram and say, "Hey! This output is unwired. That means the node is a no-op." And it will optimize out the bundle. But in this diagram, even though we're never going to use the deleted node ever again, it isn't a no-op -- it is what severs the deleted node's connection to the left and right subtrees. (Note: After further questions, I posted a clarification of this point here.) Now we have two subtrees. We test to see which one is the deeper tree and swap the wire values so that the deeper one is on the top wire. That one gets put in place of the deleted node, because that keeps the maximum number of nodes at the highest possible levels of the tree. The shallower subtree is then inserted into the map as if it were a new node.
Now, let's consider the implications of this entire VI hierarchy: There are points in this graph where we are modifying a value on one wire, which results in a change in value on another parallel wire. This is a massive no-no for dataflow programming. And yet it works in this situation -- reliably, predictably, and quickly -- without any semaphore or mutex locking. It is the sort of thing that a human being can prove safe by code inspection, but no amount of automated work from the compiler can confirm it (at least with the compiler tech that I have at my disposal -- the NI AI is still holding a grudge against me for introducing it to Gödel's Incompleteness Theorem). If you coded it wrong, you could crash LabVIEW. Should I then close this door in the compiler? I don't think so. First of all, you can't do any of this without the memory management primitives. Anyone trying to make this work without the Swap will find themselves coding regular LabVIEW and completely unable to get the map to work the way they'd like it to work. Anyone who uses the Swap primitives is definitely on the cutting edge of LabVIEW, because it is so non-intuitive compared with simply crossing the wires. And if you're advanced enough to be playing with these, then I think the full power of options should be open to you. This hierarchy gives you a Map that is perfect dataflow outside the class -- if you fork the wire, the entire Map duplicates and is thus dataflow safe for all operations, and it can be freely stored in LV2-style globals, global VIs or single-element queues if you decide you need a reference to one. It hides all the "pointer arithmetic" inside its private member VIs, thus guaranteeing the coherency and consistency of the Map and keeping the dangerous dataflow-unsafe functions from being called directly under conditions that might not be safe. I'm thinking an essay covering the analysis of this diagram and the related VI hierarchy should be added to the Certified LabVIEW Architect exam. Now, my next challenge to everyone... Do any of you recall me posting this request for a hack? The suggestion to use MoveBlock is an excellent one. You'll need it in order to complete the next challenge... Write a VI hierarchy to implement an arbitrary cyclic graph in pure G. To VIs not in the Graph.lvclass hierarchy, the graph objects should be perfect dataflow -- if you fork the wire, the entire graph replicates. The exposed API should allow you to create a connection between arbitrary nodes in the graph, including the creation of cycles. Inside the class, you may do many dataflow-unsafe activities, far worse than the moment-to-moment crimes of the Map. I have no idea if this is even possible. This is what we call "research." It's what I spend my spare time on. :-) Good night, and good luck.
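For those following along in a text language, here is a minimal C++ sketch of the subtree-severing step described in the post above. Every name here is invented; the real Map is pure G, where wires carry values rather than pointers, so treat this strictly as an analogy for what Swap buys you:

    #include <memory>
    #include <utility>

    // Invented stand-in for the Map's node class.
    struct Node {
        std::unique_ptr<Node> left, right;   // subtrees owned by this node
        // key/value payload elided
    };

    // Detach both subtrees from the node about to die without copying
    // either one: swap freshly made empty children into place, exactly
    // like swapping Leaf Node constants in for the current subtrees.
    std::pair<std::unique_ptr<Node>, std::unique_ptr<Node>>
    severSubtrees(std::unique_ptr<Node> doomed) {
        std::unique_ptr<Node> left, right;   // start as empty "leaves"
        std::swap(left, doomed->left);       // pointer swaps only --
        std::swap(right, doomed->right);     // no subtree is duplicated
        return {std::move(left), std::move(right)};
        // `doomed` dies here, taking only its (now empty) children with it.
    }

The point is the same as on the diagram: only pointers move, neither subtree is ever duplicated, and the doomed node takes nothing of value with it when it disappears.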
  5. QUOTE(Gary Rubin @ Aug 28 2007, 09:19 AM) I know that the thread hijacking objection has been raised, but a good answer was given on the main topic, so I just want to note this: If you accidentally hit ctrl+t and tile your windows, you can use ctrl+z to undo that change. It won't undo the block diagram, but your carefully sized front panel window will be restored. Not everyone notices that sizing the panel is an undoable operation, so I thought I'd mention that.
  6. I'm going to come down pretty negative on this idea. I hope it is apparent that I'm objecting on technical grounds and not personal ones. Yes, LabVOOP is my baby, and it can be easy for me to get defensive about design decisions. I've taken the time to really look at Jacemdom's suggestion, and to make sure my mind is clear of any prideful prejudices. I think I can clearly reject this idea on technical merit. Why am I prefacing this? Because this discussion is over text, which doesn't convey conversation well at all. So I'm going out of my way to say that I appreciate users who contemplate and suggest alternate implementations. I really do. But I'm going to be very direct in my objections to this idea. I want everyone to understand that it is the idea that I'm knocking, not the person. Why do I say all this? Because I like being able to provide feedback on customer suggestions, to give the background details on why LV won't be implementing them, but I have found that not all customers take it well, and I've learned to just say "interesting idea, we'll think about it" whether the idea is good or bad, so as not to give any feedback whatsoever. But Jacemdom's is a worthy suggestion that I'd like to respond to in full, because of the unique tack it takes on language design. I have no interest in starting a flame war, and I hope my comments are taken in that light. With all that in mind... QUOTE(Jacemdom @ Jun 15 2007, 09:34 AM) Even if you have such a central architect, you can't expect the central architect to do all the implementing. As various members of a team create various branches of the hierarchy, you have them integrating their changes into a central single VI. Even with the graphical merge of LV8.5, that's still creating a lot of contention for that VI. As time goes by, you may have a large enough hierarchy that no single person is even capable of describing all the possible classes -- I know of many hierarchies like this in C++ code, and one getting close to this already in G code. There would be many hierarchies that would easily expand beyond a single screen in their complexity. Also, what about when there is no "team" at all? How does a LV user extend a class developed by NI? How do multiple OpenG developers collaborate to extend each other's work? A design that requires such a central repository for the hierarchy necessarily limits extension of the hierarchy to those who can edit the hierarchy file. If Albert develops a class hierarchy and gives it to both Bob and Charlie, then under your scheme, if Bob and Charlie each develop a new class (by editing the central cluster), their work cannot be deployed on the same machine, since their two versions of the cluster cannot be in memory at the same time. Further (and this one is critical), there would be no way for the parent class to be redesigned without impact on the child classes. The parent needs to be an independently replaceable unit. That allows for the complete overhaul of the private portions of that class without disturbing any of the child classes. Indeed, with the current LV scheme, the parent can be replaced even in a runtime engine without requiring the children to recompile at all. Although there are merits to your single cluster concept as a deployment step for optimization, as a development environment I just can't see it as viable at all. QUOTE This architecture would basically consist of a hierarchy of domains and associated actions to be performed on them, by entities of ... <snip> ...
principal architect would consist of generating the overall domain architecture and then concurrent development could start. All of the above is a significant organization of *people* required to make the software architecture viable. That creates significant barriers to usage. QUOTE This methodology is currently in use and has been under development for the last 7 years. It has proved to clarify, accelerate and enhance development, while keeping in line with the original dataflow implementation. The functionalities discussed in AnotherVIEW could allow us to push this way even further. Within a single organization, a Central Architect system can be very viable, but there are many other styles of programming team that are just as effective. You work in the Cathedral, but others work in the Bazaar. I do not see how the single cluster concept makes an all-powerful architect's job easier; I do see how it makes the communal developers' work nigh on impossible. QUOTE As the proposed approach clearly separates data from functions, and the data cluster typedef is the structural equivalent of a class, this means that yes, all classes would be in memory, but what would that memory cost be if all the data contained mainly consisted of null/empty values? The memory cost is all the implementations of all the functions that are never invoked, possibly including some very large DLLs. Tying all the classes together would make many architectures I've already seen with LabVOOP not available at all. The core software can install a hierarchy. A new module can come along later and install an extension. This is the basis of the new Getting Started Window in LabVIEW 8.5. In fact, many modules can install extensions. Having all possible modules in memory is major size bloat and is not worth it if you don't use those modules. With the Getting Started Window, the classes are small, so even if you have every possible module installed, the impact is small, but the same type of architecture can be applied to many cases, and in a lot of those cases the effect would be devastating. Take the error code cluster refactoring that I posted to the GOOP forum a couple of months back. There may be specific apps that want very complex error handling routines that load all sorts of external DLLs for graph and chart displays or e-mail communication. These should not be loaded every time the user brings General Error Handler.vi into memory. And there are many hierarchies where every single operation is defined for every single class in the hierarchy; I'm willing to bet that it is far more common to have a method at every level than for the majority to be empty. QUOTE It would basically only leave the cluster definitions in memory. Would that be significant in today's multi-gigabyte RAM systems, or even in multi-megabyte systems? You could still load the functions dynamically, significantly reducing the memory usage, as the majority of bytes reside in function definitions vs. data container (TD) definitions. What gigabytes of RAM? The RT targets have 64k. Or 8k. An FPGA fabric is very limited. Hitting a PDA requires a thoroughly stripped-down runtime engine. QUOTE(quoting my argument 3: "The parent implementations would be open and visible to the child implementations. You'd lose the independence of separating implementation from interface.") Is this valid considering that in dataflow the data is naturally separated from the functions, in contrast to OO design, where methods and properties are merged in one object? Ok.
This is a completely bogus argument. And it is the root misunderstanding of all the by-reference vs. by-value debate. Let's get something clear, everyone: In C++, if I declare an object of type XYZ like this: XYZ variable; the language does not suddenly clone all the functions in memory so that this object has its own copy of all the functions. The functions are still separate from the data insofar as the assembly code for them occupies the same region of memory whether I have one instance of the class or 1000 instances of the class. The ONLY merging of functions with data is in the header file. Which is equivalent to the .lvclass file. The binding between data and functions is EXACTLY THE SAME between Java, C++ and LabVIEW. And Smalltalk. And just about any other OO language you'd like to name (I would say "all other OO languages", but I leave room for someone having developed a language such as Befunge for OO). (A small C++ illustration of this point follows at the end of this post.) Yes, my argument 3 is valid. Very much valid. Any time you have children being designed such that they depend upon a particular implementation of the parent, you have a violation of the most basic tenet of OO: encapsulation of data. QUOTE Therefore creating private data. See my previous post on why ever having public or protected data is a very bad idea. I don't care that you *can* create private data under your scheme. I object to the idea that you *can* create public data. The default direction is really not under contention here. You can default it to public or default it to private -- but the fact that it can ever be set to public, whether as the default or by deliberate change, is bad. QUOTE(quoting me: "We do need to make the process of creating accessor VIs simpler.") Is the added debugging complexity also being worked on, especially probing, which I believe to be a drawback of the chosen implementation? I believe that the ability to follow and look into the wire has been one of the main strengths of LabVIEW, and losing that decelerates my ability to write working, tested code. I think you changed topics here... give me a second... When you say "the chosen implementation", are you referring to the need to create accessors? When I first read this, that seemed to be what you were referring to. That I would disagree with. The debugging challenge is dramatically simplified by requiring the accessor VIs, because you have a bottleneck to catch all data value changes as they happen, rather than trying to set breakpoints and probes in places scattered throughout the code. But on re-reading, I think you're actually asking about the ability to display in the probe the full child data when the child is traveling on a parent wire. That is a feature I've asked my team to work on. It does pose quite a challenge, and I wouldn't expect to see it soon. But, having said that, I have yet to see much call for it. The majority of the time, if I'm debugging a parent wire, it is the parent's data cluster that I care about. The child's cluster doesn't get changed by parent operations and is rarely of interest at that point in the code. So, yes, it is an interesting problem worthy of attention, and there are cases where it would be useful. But I've spent the last year looking over the shoulders of LVClass developers, and I haven't seen this create an impediment to development. This isn't the same level of impediment as, for example, the Error Window feedback. QUOTE Does this mean that standardizing everything to this idea could simplify the architecture of LabVIEW itself?
Could you have dynamic loading on those platforms, if dynamic loading only consisted of dynamically loading functions? No. You couldn't have dynamic loading at all. The whole point is to use this for targets such as FPGA where there is only a single deployment to the target and all possible classes are known at compile time. SUMMARY: In short, the central repository of data definition for an entire hierarchy is, in my opinion, unworkable for development. It is a useful concept for deployment only. Tying an entire hierarchy together limits extensibility and places restrictions on the types of software teams that can work on the software. I hope all the above makes sense.
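To make the "declaring an object does not clone the functions" point above concrete, here is exactly the C++ situation described (a minimal sketch; the class body is invented filler):

    struct XYZ {
        int data = 0;                // per-instance: each object gets its own copy
        void method() { /* ... */ }  // per-class: one function body in memory,
    };                               // shared by every instance, as in plain C

    int main() {
        XYZ a, b, c;             // three copies of `data`...
        a.method();              // ...but method() is the same machine code,
        b.method();              // at the same address, for all three
        c.method();              // objects -- nothing was cloned
    }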
  7. QUOTE(DanielChile @ Aug 28 2007, 11:51 AM) No such property exists in LV8.5.
  8. Offering help on this scale over a discussion forum would be a tall order. I doubt that the LAVA folks are going to be able to help much, not because we're unwilling to help but because there's really no way to just teach LV over the discussion forums. Isn't there anyone who knows LV in your physical vicinity who might be able to help you? We're going to be better able to comment on a particular block diagram or a proposed VI hierarchy, but at this stage of your project, your question is sort of like landing on a new planet and asking us to brainstorm all the possible constellations from the stars in the sky. Too much space to cover and no clear place to begin. But if you get to the point where you have studied the stars in the heavens and are ready to propose a set of constellations, we can kibitz on whether the constellations are recognizable and how well they reflect the LV mythology. (Have I stretched this metaphor too far? Perhaps, but you get the ... ahem ... picture, I hope.)
  9. QUOTE(Jim Kring @ Aug 24 2007, 01:09 PM) Ok, Jim. Let's take a deep breath and start from the top. First, is the computer plugged in? Seriously... the Open VI Reference from LLB is used on every single test of the nightly autotest suite. It works. Not sure what's wrong with your code, but that definitely works.
  10. QUOTE(Val Brown @ Aug 24 2007, 10:00 AM) *chuckle* I'm sorry... reading this (and the other posts) I can't help thinking of Monty Python and the Holy Grail. "Help! Help! I'm being repressed!" You're in no danger whatsoever of classes being forced on you by the removal of clusters. Nor any other method of moving you toward classes by NI. The only thing that will compel you toward classes will be the awe-inspiring beauty of the coherent libraries of VIs that your peers produce in the next few years and the shame that you feel when you compare them to your own VI hierarchies. :worship: Why should we try to force you to use classes when your own base desires (for good code and sustainable designs) will draw you inevitably toward them? Tangent A: C++ does not compel the use of classes. You can backslide into C anytime you want. The C++ compiler accepts all C syntax. Tangent B: G as a pretty IDE for C++??? What a HORRIBLE vision!! Have you *seen* C++? It is more of a hack than a language. Like Darth Vader, C++ "is more machine than man now." My hat goes off to the hackers who designed it... there are amazing, amazing aspects to it. But for arcane syntax, it wins over just about every language I've ever seen. G should not be a pretty IDE for any of the traditional programming languages. What it should be is a pretty IDE for expressing human concepts and needs to the CPU of the machine, in the most elegant, efficient and intelligible way possible... which is why you'll eventually _want_ classes.
  11. Washington D.C. or the state of Washington?
  12. QUOTE(i2dx @ Aug 23 2007, 03:41 PM) LabVIEW 8.5. Read the Upgrade Notes on "recursion". QUOTE(Justin Goeres @ Aug 23 2007, 10:40 AM) This reminds me of AQ's old story about the "feature" where LabVIEW would crash if a wire greater than 65,536 pixels in length had a bend in it (or something along those lines). The full story is this... a user filed a bug report that "If I popup on a wire and select 'Clean Up Wire', if the wire is over 16k pixels long, LabVIEW just deletes the wire." I rejected the bug report on the grounds that LabVIEW had done the right thing to clean up the wire.
  13. Jim: Query... once the VI is loaded into memory, thereby loading the class, can you give a path to the Get Default Instance.vi that is just the class' name and have it work successfully? I can't look at the code right now, but my memory of how it is implemented suggests this might work. -- Stephen
  14. QUOTE(MikaelH @ Aug 23 2007, 04:59 PM) Hm... I'd consider that a bug. Just as an int can have a specified value of 23, you ought to be able to fully specify the default value of the BaseClass. I wouldn't allow the UML to specify all the fields (after all, the UML didn't go through the class' API to set those fields, so you might be coding an inconsistent class state). But if the fields are already set, the UML shouldn't overwrite them with a default instance. In this particular case, the default instance is what is used, but that won't always be true. At the very least, you should compare the existing value against a default instance before scripting over it and post a warning that the data has been changed to the default default value. QUOTE(Tomi Maila @ Aug 23 2007, 11:34 AM) Was it even close? Answers will be posted over the weekend or possibly on Monday. :ninja:
  15. This issue has been reported to R&D (4CML9UJ1) for further investigation. I've marked the CAR with screaming high priority. It's a test case that just got missed. Dammit.
  16. QUOTE(jaegen @ Aug 23 2007, 10:50 AM) The "VIs not inside the EXE" bug (and, yes, I do consider it a bug) remains in LabVIEW 8.5. It is on the "must fix" list in the next LabVIEW version. The end goal is to be able to store multiple files of the same name inside the EXE without mangling them. QUOTE(Michael_Aivaliotis @ Aug 23 2007, 12:10 PM) until NI forces us to use classes (which they probably will). No. NI probably won't. There are very few believers within the walls of NI.
  17. QUOTE(Tomi Maila @ Aug 23 2007, 05:35 AM) http://en.wikipedia.org/wiki/AVL_tree a) The show buffers tool works fine for LV classes, and all those dots are places where buffers are allocated, but you need to know that allocating a new LVClass is dirt cheap -- 1 pointer in all cases, since there's only a single shared copy of the default default value. b) As for an explanation of the various swap nodes and force copy blocks, read the LVOOP white paper and focus on the section titled "What is the in-memory layout of a class?" and the paragraph that starts with "For advanced LVOOP programmers there is a counterintuitive feature that we considered removing". QUOTE(MikaelH @ Aug 23 2007, 01:06 AM) BTW, how do you set the default data to a leaf Node in the Map class attribute named Root, which is a Map Node Object? In 8.5, drop a constant of class Child and wire it to an indicator of class Parent. Run the VI and then use Make Current Value Default. Change the indicator to a control and move it into the private data control. Make Current Value Default does not work for LV classes in LV8.2. PS: This may be a situation that your UML tool should be aware of... Class A may have a control of class Parent in its private data, but it may use an even lower Child class as the default value of that control. I don't know if you care to highlight such a situation in the UML, but it is a relationship that LV has to track in order to know to load the child class into memory whenever Class A loads.
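A loose C++ analogy of point (a), with invented names -- this is a guess at the shape of the mechanism, not NI's actual internals: a fresh object is one pointer aimed at the single shared default default value, and a private copy is paid for only when the data actually changes:

    #include <memory>

    struct ClassData { /* fields of the private data cluster */ };

    struct LVObjectValue {
        // Every new object starts by sharing the one default value;
        // cost: a single pointer.
        std::shared_ptr<const ClassData> data = sharedDefault();

        static const std::shared_ptr<const ClassData>& sharedDefault() {
            static auto def = std::make_shared<const ClassData>();
            return def;
        }

        // Copy-on-write: only a mutation pays for a private copy.
        void mutate() {
            data = std::make_shared<ClassData>(*data);
            /* ...modify the private copy here... */
        }
    };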
  18. Here's my attempt at implementing Map in LV8.5. Very straightforward. I need someone to write the tree balancing algorithm on one of the subVIs (Map.lvlib:Branch Node.lvclass:Node Insert Pair.vi). I don't have the patience to hunt down the AVL-tree balancing algorithm and make it into LV code. You'll find some *fascinating* code that definitely only works in LV8.5 (uses the new inplaceness nodes) on the block diagram of Map.lvlib:Branch Node.lvclass:Node Delete Key.vi. 20 points and a bowl of gruel to the first person who can explain why the *empty* Sequence Structure is labeled "Do not delete". ;-) NOTE: The attachment got deleted when LAVA was wiped out in early 2009. Here is a link to it: http://lavag.org/topic/5983-map-implemented-with-classes-for-85/page__view__findpost__p__70238
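For anyone tempted by the balancing request, here is the single rotation at the heart of AVL rebalancing, sketched in C++ with invented names; the real work, of course, is translating this into G on the diagram of Node Insert Pair.vi:

    #include <algorithm>
    #include <memory>

    struct Node {
        std::unique_ptr<Node> left, right;
        int height = 1;
    };

    int height(const std::unique_ptr<Node>& n) { return n ? n->height : 0; }

    void updateHeight(Node& n) {
        n.height = 1 + std::max(height(n.left), height(n.right));
    }

    // Right rotation: the left child becomes the new root of this subtree.
    std::unique_ptr<Node> rotateRight(std::unique_ptr<Node> root) {
        auto pivot = std::move(root->left);    // pivot rises...
        root->left = std::move(pivot->right);  // ...its right subtree crosses over
        updateHeight(*root);
        pivot->right = std::move(root);        // old root descends to the right
        updateHeight(*pivot);
        return pivot;
    }

The mirrored left rotation plus the usual balance-factor test (rotate once or twice whenever the subtree heights differ by more than one after an insert) complete the algorithm.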
  19. If you have comments about the *overall* nature of LabVIEW classes, please post them here. Please don't post feature requests here. I'm interested mostly in getting a feel for adoption rates. I'd like to know how secure everyone feels doing professional development with LabVIEW classes. If there is a particular sticking point (other than you haven't upgraded at all yet) that is keeping you from developing with classes, I'd be interested in that.
  20. The Get Default Value VI is new in 8.5. In 8.2, there was a property on an LVClass Library reference that would get you the default value. This was deprecated because it does not work in the runtime engine. The subVI in 8.5 will work in the runtime engine. QUOTE(Jim Kring @ Aug 22 2007, 05:19 PM) The documentation says that all LVLibrary properties and methods do not work in the runtime engine. Perhaps a specific method or property got mislabeled, but the top-level docs are pretty clear on this point. There was some misinformation posted to LAVA by me early in the 8.2 release because I thought that the properties/methods were available in the runtime engine. That lack of functionality led my team to develop the 8.5 subVI.
  21. Please file this with the NI Product Suggestion Center.
  22. mane: Here's the post that covers specifically what you're doing: http://forums.lavag.org/index.php?showtopi...ost&p=34203 QUOTE No, it won't be allowed someday. As I explained in the other thread, providing such a generic casting mechanism would be a major violation of encapsulation. You're free to write such a conversion tool (as shown in the post above) for making children out of parents. But LV will never provide such functionality directly, nor will any other OO language. (Again, see the other thread for details about why.)
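To spell out that distinction in C++ terms (invented names; just a sketch of the principle): a blind downcast cannot conjure child data that was never there, but an explicit conversion function can legitimately build a child from a parent's data:

    struct Parent { int base = 0; virtual ~Parent() = default; };
    struct Child : Parent { int extra = 0; };

    // What a generic parent-to-child cast would have to do -- and can't:
    // a plain Parent carries no Child data, so the downcast yields null.
    Child* blindCast(Parent* p) { return dynamic_cast<Child*>(p); }

    // What you *can* write yourself, per the linked post: an explicit
    // conversion that constructs a genuine Child and copies the parent part.
    Child makeChildFrom(const Parent& p) {
        Child c;
        c.base  = p.base;   // carry over the parent portion...
        c.extra = 0;        // ...and choose the child-only fields deliberately
        return c;
    }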
  23. QUOTE(Paul_at_Lowell @ Aug 21 2007, 02:54 PM) The problem also occurs if you call Datasocket Open using a Call By Reference node. The problem is not restricted to LV classes. It is restricted to any dynamic invocation of the VI containing Open Datasocket primitive. The wrapper workaround works for this case as well.
  24. QUOTE(Pana-man @ Aug 20 2007, 04:52 PM) And therein lies the design flaw. The parent doesn't even know it has children. All the VIs in the parent class are written in terms of nothing but the parent fields and the parent methods. Those VIs assume that the data on the wire is some type of Parent, but they can't know anything more specific. Child data may actually be on that Parent wire at run time, but it is behaving as its parent. An example: Class Shape has data "anchor point" and four member VIs: Get Point, Move, Draw, and Tesselate. Tesselate simply does an iterative "move and draw" over and over to fill in a space. The entirety of Tesselate is written in terms of Shape. At run time, a Circle may come down the wire. Circle has data "radius". It inherits an anchor point from its parent, Shape. Circle can't access that point directly, but it can use Get Point. Circle defines its own Draw method. Now when Circle is passed to Tesselate, the Tesselate function moves the circle, draws it, and repeats over and over... but the entire time it is not accessing any specific data of Circle. Only when the dynamic dispatch to Draw is done does anything specific to Circle get invoked.
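Since the Shape example maps cleanly onto any OO language, here it is sketched in C++ (class and method names follow the post; the bodies are invented filler):

    #include <iostream>

    struct Point { double x = 0, y = 0; };

    class Shape {
        Point anchor;                      // private: Circle can't touch it
    public:
        virtual ~Shape() = default;
        Point getPoint() const { return anchor; }
        void move(double dx, double dy) { anchor.x += dx; anchor.y += dy; }
        virtual void draw() const { std::cout << "shape\n"; }
        void tesselate() {                 // written purely in terms of Shape
            for (int i = 0; i < 3; ++i) {  // "move and draw, over and over"
                move(1.0, 0.0);
                draw();                    // the only dynamic dispatch
            }
        }
    };

    class Circle : public Shape {
        double radius = 1.0;               // Circle's own data
    public:
        void draw() const override { std::cout << "circle r=" << radius << "\n"; }
    };

    int main() {
        Circle c;
        c.tesselate();  // Tesselate never touches Circle's data directly;
    }                   // only the dispatch to draw() invokes Circle code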
  25. I really want to know... what sound did your keyboard make? What is the sound of one hand typing???