Why do memory cards come in non-standard capacities like 80GB and 320GB?

For a long time, memory card capacities were powers of two: 2GB, 4GB, 8GB, 16GB... But after CFast, XQD, and CFexpress cards appeared, "non-standard" capacities such as 80GB, 120GB, 240GB, and 325GB showed up on the market. What is the reason behind this?

Both memory cards and solid-state drives use NAND flash as their storage medium, and flash cells can endure only a limited number of program/erase cycles (i.e., write endurance). To even out wear across the different blocks of the flash and extend its service life, some space must be held in reserve: this is the first layer of OP (over-provisioning) space, the inherent OP.

Almost all memory cards and solid-state drives show a capacity gap because of this first layer of OP space: a 128GB card typically offers about 118GB of usable capacity, a 512GB card about 474GB, and so on. The gap is not a fixed value, but it is generally around 7%. Since mechanical hard drives also show a roughly 7% difference between decimal (base-1000) and binary (base-1024) capacity units, manufacturers do not change a product's labeled capacity on account of the first layer of OP space. The first layer of OP space therefore cannot explain why products with nominal capacities of 80GB, 120GB, or 240GB exist.
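As a quick sanity check on that ~7% figure, the gap between base-1000 and base-1024 units can be computed directly. This is a minimal sketch of the unit conversion only; the exact usable figures quoted above also depend on each vendor's OP reserve:

```python
def decimal_gb_as_gib(nominal_gb: int) -> float:
    """Convert a nominal decimal-GB capacity (10^9 bytes) to binary GiB (2^30 bytes)."""
    return nominal_gb * 10**9 / 2**30

for nominal in (128, 512):
    gib = decimal_gb_as_gib(nominal)
    # The ratio 10^9 / 2^30 is about 0.931, i.e. a ~7% shortfall on the label.
    print(f"{nominal}GB nominal ~ {gib:.1f}GiB ({gib / nominal:.1%} of the label)")
```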

Reserved space is not only about lifespan: its size also affects storage performance (especially random reads and writes, and performance when little free capacity remains). Therefore, some manufacturers carve out an additional portion of the remaining capacity; this is the second layer of OP space, the factory-set OP. Because this removes some capacity, manufacturers adjust the nominal capacity accordingly: 120GB is derived from 128GB, 240GB from 256GB, and 480GB (as well as 500GB) from 512GB.
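The size of that factory-set reserve can be read off directly from the nominal/raw pairings above. A small sketch (illustrative arithmetic only, not vendor specifications):

```python
# Nominal capacity (GB) -> underlying raw capacity (GB), per the pairings above.
pairs = {120: 128, 240: 256, 480: 512}

for nominal, raw in pairs.items():
    op = raw - nominal
    # In each case the factory-set OP works out to the same fraction of raw capacity.
    print(f"{raw}GB raw -> {nominal}GB nominal: {op}GB factory OP ({op / raw:.2%})")
```

Interestingly, all three pairings reserve the same fraction (6.25%, or 1/16) of the raw capacity.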

The second layer of OP space is why "non-standard" capacities such as 120GB, 240GB, and 480GB exist. And whether a product has a standard capacity (64GB, 128GB, 256GB) or a non-standard one (120GB, 240GB, 480GB), there will still be a gap between nominal and actual usable capacity because of the first layer of OP space.

The main directions of flash memory development are higher density and lower cost per gigabyte; speed and endurance actually come second. From SLC and MLC to today's TLC and QLC, capacities have grown and prices have fallen, but this also brought the "speed drop" problem: once enough data is written continuously, the write speed falls off a cliff. A memory card whose nominal speed exceeds 1000MB/s yet cannot sustain recording 2600Mbps (325MB/s) video is suffering exactly this drop.
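The unit conversion behind that pair of numbers is worth spelling out, since bitrates are quoted in megabits (Mb) while card speeds are quoted in megabytes (MB):

```python
# Why a 2600Mbps video stream needs ~325MB/s of sustained writes:
# Mb = megabit, MB = megabyte, 8 bits per byte.
bitrate_mbps = 2600
required_mb_s = bitrate_mbps / 8
print(f"{bitrate_mbps}Mbps = {required_mb_s:.0f}MB/s sustained write")
# A card rated "1000MB/s" by peak (cached) speed can still fail here if its
# post-cache sustained write speed falls below 325MB/s.
```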

To solve this problem, some memory card manufacturers use a mode called "full-disk pSLC" (p = pseudo, also translated as "simulated SLC"), which operates today's mainstream TLC flash as if it were SLC, storing one bit per cell instead of three. This does greatly increase write speed and virtually eliminates the speed-drop problem, but at the cost of reducing the usable capacity to 1/3.

It must be emphasized that there are also products on the market that use a "partial pSLC" mode. For example, one product has a nominal capacity of 80GB and reports 86GB to the system, yet in testing its speed drops after about 16GB of continuous writing. It presumably sets 48GB of its TLC space to pSLC mode, obtaining a combination of 70GB of low-speed space plus 16GB of high-speed space (a usable capacity close to the nominal 80GB). This means that after more than 16GB of continuous writes, the speed collapses while the cache is flushed, stuttering becomes likely, and high-bitrate video recording may even be interrupted.
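The speculated partition can be checked with simple arithmetic. Note the 118GB raw figure below is inferred from the 70GB + 48GB split above; it is not a manufacturer specification:

```python
# Illustrative check of the partial-pSLC capacity math speculated above.
raw_tlc_gb = 118               # inferred raw TLC capacity: 70GB left as TLC + 48GB repurposed
pslc_slice_gb = 48             # TLC space reconfigured to pSLC mode
fast_gb = pslc_slice_gb // 3   # pSLC stores 1 bit/cell instead of 3, so 48GB -> 16GB
slow_gb = raw_tlc_gb - pslc_slice_gb
print(f"{fast_gb}GB high speed + {slow_gb}GB low speed = {fast_gb + slow_gb}GB usable")
```

The result, 86GB, matches the capacity the example product reports to the system.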
