It is said that users are able to fill up any disk resource. But not in the case of CEPH!
This storage system is built from individual multi-drive servers (nodes) connected to a network. As further nodes are added, the capacity of the system grows linearly and its performance grows with it. The only limitations are your budget and the space in your server room.

Our company has carried out a production deployment of a high-performance disk cluster based on CEPH, serving as shared storage for a computing cluster. With a capacity of 1 PB and tolerance of the failure of any single node, the cluster reaches transfer speeds in excess of 10 GB/s.

CEPH is built on a distributed architecture of individual nodes full of drives, interconnected over a network. We can increase both the capacity and the performance of the system simply by adding another node. The performance gain comes from distributing writes and reads across many nodes and disks. CEPH provides a distributed file system as well as block and object access.
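
At the lowest level, data is stored as objects in pools. As a rough sketch, the python3-rados bindings can write and read a single object directly; the pool name "data" and the object name are examples only and must match an existing pool in a real cluster:

    # Minimal sketch of native object access via librados (python3-rados);
    # the pool "data" and the object name are examples, not a recommendation.
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('data')                    # pool must already exist
        try:
            ioctx.write_full('hello-object', b'Hello CEPH')   # store an object
            print(ioctx.read('hello-object'))                 # read it back
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
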
CEPH Filesystem (CephFS) is a distributed file system that allows multiple client nodes to read and write simultaneously. The CEPH client has been part of the Linux kernel since version 3.10.
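
Besides the in-kernel client, the same file system can also be reached from user space through libcephfs. Below is a hedged sketch assuming the python3-cephfs bindings; the path and file content are purely illustrative:

    # Hedged sketch: create, write and read a file on CephFS via libcephfs
    # (python3-cephfs bindings); the path and contents are examples only.
    import cephfs

    fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
    fs.mount()                                    # attach to the CephFS root
    try:
        fd = fs.open(b'/hello.txt', 'w', 0o644)   # create the file
        fs.write(fd, b'written via libcephfs', 0)
        fs.close(fd)

        fd = fs.open(b'/hello.txt', 'r', 0o644)
        print(fs.read(fd, 0, 64))                 # read back up to 64 bytes
        fs.close(fd)
    finally:
        fs.unmount()
        fs.shutdown()
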
CEPH Block Device (RBD) lets you create images that are comparable to partitions or LUNs on a storage array. Each image can be formatted with any file system and attached to an existing system, or used as the disk of a new virtualized system.
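
For instance, an image can be created and written to with the python3-rbd bindings. This is only a sketch; the pool name, image name and size are assumptions:

    # Hedged sketch: create a 10 GiB block image and write to it via librbd
    # (python3-rbd bindings); pool name, image name and size are examples.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('rbd')                         # assumed pool name
        try:
            rbd.RBD().create(ioctx, 'vm-disk-01', 10 * 1024**3)   # 10 GiB image
            with rbd.Image(ioctx, 'vm-disk-01') as image:
                image.write(b'bootstrap data', 0)                 # write at offset 0
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
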
CEPH Object Gateway is an object storage gateway that exposes the disk cluster through an API compatible with the Amazon S3 and OpenStack Swift cloud services.
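
In practice this means that standard S3 tooling can talk to the cluster directly. The sketch below uses boto3 against an RGW endpoint; the endpoint URL, credentials and bucket name are placeholders:

    # Hedged sketch: use the S3-compatible API of the CEPH Object Gateway via
    # boto3; the endpoint URL, credentials and bucket name are placeholders.
    import boto3

    s3 = boto3.client(
        's3',
        endpoint_url='http://rgw.example.local:7480',   # placeholder RGW endpoint
        aws_access_key_id='ACCESS_KEY',
        aws_secret_access_key='SECRET_KEY',
    )

    s3.create_bucket(Bucket='backups')
    s3.put_object(Bucket='backups', Key='hello.txt', Body=b'stored via the S3 API')
    print(s3.get_object(Bucket='backups', Key='hello.txt')['Body'].read())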

CEPH lets us take snapshots of our systems, which makes it easy to create backups of our data. It also replicates stored objects, which, with the appropriate configuration, provides fault tolerance at the level of disks, nodes, and even entire server racks.
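
As a small illustration, a point-in-time snapshot of a block image can be created and listed through librbd. This is a sketch only; the pool, image and snapshot names are assumptions, and rack-level fault tolerance itself is configured separately through the cluster's CRUSH rules rather than shown here:

    # Hedged sketch: snapshot an existing RBD image via python3-rbd;
    # pool, image and snapshot names are examples only.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('rbd')
        try:
            with rbd.Image(ioctx, 'vm-disk-01') as image:
                image.create_snap('before-upgrade')     # point-in-time snapshot
                for snap in image.list_snaps():
                    print(snap['name'], snap['size'])
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
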
The HP SL4550 server and its Gen9 successors are ideal machines for such applications.
Their drives are mounted along the entire depth of the enclosure, which makes it possible to reach a capacity of more than 3 PB in a single rack.
We will publish more information about our implementation in the next entry.