Quenty
Joined: 03 Sep 2009 | Total Posts: 9316
25 Apr 2012 08:01 PM
Why would deleting one unarchived, blank model create a 3-second delay in ROBLOX?
Anyone know? Also, any performance tips (coding-wise)?
Also, is ROBLOX supposed to use 233 MB just sitting open (no game, no browser, nothing)?
LPGhatguy
Joined: 27 Jun 2008 | Total Posts: 4725
25 Apr 2012 08:04 PM
As far as the 233 MB goes, I'm assuming (big assumption here) that ROBLOX pre-allocates a bunch of memory at startup (and maybe doesn't release it) to make things faster.
aboy5643a
Joined: 20 Nov 2010 | Total Posts: 2785
25 Apr 2012 08:09 PM
"faster" ololololol nice joke |
25 Apr 2012 08:09 PM
Or maybe there's a memory leak somewhere? Seriously, I can't spend more than an hour in Studio on my Mac before it just stops working. It's not that it becomes unresponsive; it just stops taking input from my keyboard and mouse.
myrkos
Joined: 06 Sep 2010 | Total Posts: 8072
25 Apr 2012 08:10 PM
Memory leaks are overrated. I'll leave it at that.
LocalChum
Joined: 04 Mar 2011 | Total Posts: 6906
25 Apr 2012 08:47 PM
Roblox thinks that not calling delete or free after using new or malloc is the new and cool way of doing things.
jode6543
Joined: 16 Jun 2009 | Total Posts: 5363
25 Apr 2012 08:56 PM
@Myrkos Well, they're overrated right up until you decide to ignore them. If you think memory leaks are overrated, you won't bother looking for them, and you'll end up with an ever-growing pile of unused, inaccessible allocated memory. That said, one or two leaks won't crash a system or do much real harm, so as long as you make a reasonable effort to avoid them, it's not that big a deal.
-Jode
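For scripters, the closest Lua-side equivalent of a "leak" isn't a missing free; it's a long-lived table that keeps references alive so the garbage collector can never reclaim them. A minimal sketch in plain Lua (the cache table, sizes, and labels here are made up for illustration):

-- In a garbage-collected language nothing is lost to the allocator;
-- memory piles up because references keep objects alive.
local cache = {}

for i = 1, 100000 do
    cache[i] = { data = string.rep("x", 100) }   -- references accumulate, never pruned
end

collectgarbage()
print("KB in use while the cache holds everything:", collectgarbage("count"))

cache = nil                                       -- drop the only reference to the lot
collectgarbage()
print("KB in use after letting go of it:", collectgarbage("count"))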
25 Apr 2012 10:39 PM
The solution:
Don't use new and don't use malloc either!
Nah, kidding.
LPGhatguy
Joined: 27 Jun 2008 | Total Posts: 4725
25 Apr 2012 11:12 PM
@aboy5643a Remember, ROBLOX is aiming to run on ancient hardware, and memory speed can be a real bottleneck on machines that old.
Legend26
Joined: 08 Sep 2008 | Total Posts: 10586
25 Apr 2012 11:14 PM
Did you have the browser open while you were deleting the model? It could be caused by that notorious plugin that loves to freeze stuff.
Quenty
Joined: 03 Sep 2009 | Total Posts: 9316
25 Apr 2012 11:42 PM
No, I closed it. I even disabled plugins. Seriously, my game takes up another 300 MB on top of that. :/
I'm on a 6 GB machine, with ReadyBoost on a 4 GB flash drive (about 3.5 GB allocated).
You would _THINK_ that deleting one part wouldn't take 1-3 seconds. :/
Quenty
Joined: 03 Sep 2009 | Total Posts: 9316
25 Apr 2012 11:53 PM
Performance tips, you say?
Haha, I've been reading that book by Roberto and the other creators of Lua. It's incredible how much you can learn from it about optimizing code.
"Usually, you do not need to know anything about how Lua implement tables to use them. Actually, Lua goes to great lengths to make sure that implementation details do not surface to the user. However, these details show themselves through the performance of table operations. So, to optimize programs that use tables (that is, practically any Lua program), it is good to know a little about how Lua implements tables. The implementation of tables in Lua involves some clever algorithms. Every table in Lua has two parts: the array part and the hash part. The array part stores entries with integer keys in the range 1 to n, for some particular n. (We will discuss how this n is computed in a moment.) All other entries (including integer keys outside that range) go to the hash part. As the name implies, the hash part uses a hash algorithm to store and find its keys. It uses what is called an open address table, which means that all entries are stored in the hash array itself. A hash function gives the primary index of a key; if there is a collision (that is, if two keys are hashed to the same position), the keys are linked in a list, with each element occupying one array entry. When Lua needs to insert a new key into a table and the hash array is full, Lua does a rehash. The first step in the rehash is to decide the sizes of the new array part and the new hash part. So, Lua traverses all entries, counting and classifying them, and then chooses as the size of the array part the largest power of 2 such that more than half the elements of the array part are filled. The hash size is then the smallest power of 2 that can accommodate all the remaining entries (that is, those that did not fit into the array part). When Lua creates an empty table, both parts have size 0 and, therefore, there are no arrays allocated for them. Let us see what happens when we run the following code: local a = {} for i = 1, 3 do a[i] = true end It starts by creating an empty table a. In the first loop iteration, the assignment a[1]=true triggers a rehash; Lua then sets the size of the array part of the table to 1 and keeps the hash part empty. In the second loop iteration, the assignment a[2]=true triggers another rehash, so that now the array part of the table has size 2. Finally, the third iteration triggers yet another rehash, growing the size of the array part to 4.
A code like a = {} a.x = 1; a.y = 2; a.z = 3 does something similar, except that it grows the hash part of the table. For large tables, this initial overhead is amortized over the entire creation: While a table with three elements needs three rehashings, a table with one million elements needs only twenty. But when you create thousands of small tables, the combined overhead can be significant. Older versions of Lua created empty tables with some pre-allocated slots (four, if I remember correctly), to avoid this overhead when initializing small tables. However, this approach wastes memory. For instance, if you create millions of points (represented as tables with only two entries) and each one uses twice the memory it really needs, you may pay a high price. That is why currently Lua creates empty tables with no pre-allocated slots.
[ ... ]
When programming in Lua, you may use constructors to avoid those initial rehashings. When you write {true, true, true}, Lua knows beforehand that the table will need three slots in its array part, so Lua creates the table with that size. Similarly, if you write {x = 1, y = 2, z = 3}, Lua will create a table with four slots in its hash part. As an example, the next loop runs in 2.0 seconds: for i = 1, 1000000 do local a = {} a[1] = 1; a[2] = 2; a[3] = 3 end If we create the tables with the right size, we reduce the run time to 0.7 seconds: for i = 1, 1000000 do local a = {true, true, true} a[1] = 1; a[2] = 2; a[3] = 3 end If you write something like {[1] = true, [2] = true, [3] = true}, however, Lua is not smart enough to detect that the given expressions (literal numbers, in this case) describe array indices, so it creates a table with four slots in its hash part, wasting memory and CPU time."
Impressive.
That means {true, true, true} runs faster and wastes less memory than {[1] = true, [2] = true, [3] = true}, so they're not exactly the same thing.
It also means you can avoid useless rehashes by giving a table a starting size: fill the constructor with placeholder values to pre-size the array part, the hash part, or both.
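A minimal sketch of that comparison in plain Lua 5.1, outside of ROBLOX (the bench helper and the loop count are just made up here; absolute timings vary by machine, so only the relative gap matters):

-- Rough benchmark of the three construction styles from the quote.
local function bench(label, build)
    collectgarbage()                      -- start each run from a clean heap
    local start = os.clock()
    for i = 1, 1000000 do
        local a = build()
        a[1] = 1; a[2] = 2; a[3] = 3
    end
    print(label, os.clock() - start)
end

-- Empty constructor: no slots, so the three assignments force rehashes.
bench("{}", function() return {} end)

-- Array constructor: three array slots up front, no rehash needed.
bench("{true, true, true}", function() return {true, true, true} end)

-- Explicit keys: the entries land in the hash part, which is slower and larger.
bench("{[1] = true, ...}", function() return {[1] = true, [2] = true, [3] = true} end)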
25 Apr 2012 11:55 PM
"The size of both parts of a table are recomputed only when the table rehashes, which happens only when the table is completely full and Lua needs to insert a new element. As a consequence, if you traverse a table erasing all its fields (that is, setting them all to nil), the table does not shrink. However, if you insert some new elements, then eventually the table will have to resize. Usually this is not a problem: if you keep erasing elements and inserting new ones (as is typical in many programs), the table size remains stable. However, you should not expect to recover memory by erasing the fields of a large table: It is better to free the table itself."
From now on, every time I remove a bunch of elements from a table, I'm going to insert one more and immediately remove it, just so the table gets a chance to resize. >:3
Well, by that I mean only when there are enough elements for it to be worth it.
And every time I have to clear out a table, I'll just create a new one and let the garbage collector take the old one.
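That behaviour is easy to see for yourself in plain Lua 5.1 with collectgarbage("count"); the one-million figure below is arbitrary, and the exact numbers depend on the Lua build:

-- Erasing every field does not shrink the table's internal arrays;
-- only dropping the table itself gives the memory back.
local t = {}
for i = 1, 1000000 do t[i] = true end

collectgarbage()
print("KB with a million entries:   ", collectgarbage("count"))

for i = 1, 1000000 do t[i] = nil end
collectgarbage()
print("KB after nil-ing every entry:", collectgarbage("count"))  -- barely changes

t = nil
collectgarbage()
print("KB after dropping the table: ", collectgarbage("count"))  -- drops for real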
26 Apr 2012 05:18 AM
Roblox uses 2?? MB at startup because it uses ugly, slow Windows-y GUI components, duh!
Bubby4j
Joined: 25 Dec 2008 | Total Posts: 1831
26 Apr 2012 08:51 AM
@FlashDriveReadyBoostGuy Isn't a hard drive faster than a flash drive? I don't see the point of Windows ReadyBoost.
26 Apr 2012 09:01 AM
No, I think the flash drive is faster. I'm not sure about sustained read speeds, but flash drives find the right spot almost instantly, and since the cached data is scattered all over the place, a mechanical drive has to move its slow head, which takes multiple milliseconds per seek.
Bubby4j
Joined: 25 Dec 2008 | Total Posts: 1831
26 Apr 2012 02:08 PM
The seek time may be faster, but the read (and probably write) speed is considerably slower. You would probably only see a performance increase when reading a lot of very small files.