Today the FreeBSD operating system turns 26 years old. This is why I got something special today :).

How about using FreeBSD as an Enterprise Storage solution on real hardware? This is where FreeBSD shines with all its storage features, ZFS included. Today I will show you how I have built a so-called Enterprise Storage system based on FreeBSD, with more than 1 PB (Petabyte) of raw capacity.

I have built various storage related systems based on FreeBSD:

- Distributed Object Storage with Minio on FreeBSD
- GlusterFS Cluster on FreeBSD with Ansible and GNU Parallel
- Silent Fanless FreeBSD Server – Redundant Backup

While both the GlusterFS and Minio clusters were done on virtual hardware (or even FreeBSD Jails containers), this one uses real physical hardware.

How much storage space can you squeeze from a single 4U system? It turns out a lot! Definitely more than 1 PB (1024 TB) of raw storage space.

Here is the (non clickable) Table of Contents:

- UPDATE 2 – Real Life Pictures in Data Center

There are 4U servers with 90-100 3.5″ drive slots, which will allow you to pack 1260-1400 Terabytes of data (with 14 TB drives). Two such machines are the TYAN FA100 and the Supermicro SuperStorage 6048R-E1CR90L (90 bays). I would use the first one – the TYAN FA100, to use its short name.

- 2 x 10-Core Intel Xeon Silver 4114 CPU 2.20GHz
- 90 x Toshiba HDD MN07ACA12TE 12 TB (Data)

The price of the whole system is about $65 000 – drives included. One thing that you will need is a rack cabinet that is 1200 mm long to fit that monster :).

Management Interface

The so-called Lights Out management interface is really nice. It is not bloated, it is well organized, and it works quite fast. You can create several separate user accounts or connect to external user services like LDAP/AD/RADIUS, for example. After logging in, a simple Dashboard welcomes us. We have access to various Sensor information with temperatures of system components. We have System Inventory information with the installed hardware. There is a separate Settings menu for various setup options. I know it is 2019, but an HTML5-only Remote Control (remote console) without the need for any third-party plugins like Java/Silverlight/Flash/… is very welcome. One is of course also allowed to power on/off/cycle the box remotely.

After booting into the BIOS/UEFI setup it is possible to select which drives to boot from. On the screenshots: the two SSD drives prepared for the system. The BIOS/UEFI interface shows two Enclosures, but these are the two Broadcom SAS3008 controllers. Some drives are attached via the first Broadcom SAS3008 controller and the rest via the second one – they are just called Enclosures instead of controllers for some reason.

I have chosen the latest FreeBSD 12.0-RELEASE for the purpose of this installation. It is a generally very 'default' installation with a ZFS mirror on two SSD disks. The installation of course supports the ZFS Boot Environments bulletproof upgrades/changes feature.

```
# zpool list
NAME   SIZE  ALLOC  FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
zroot  220G  3.75G  216G        -         -    0%    1%  1.00x  ONLINE  -

# df -g
Filesystem  1G-blocks  Used  Avail  Capacity  Mounted on
```

From all the possible setups with 90 disks of 12 TB capacity each, I have chosen to go the RAID60 way – its ZFS equivalent, of course. With 12 disks in each RAID6 (raidz2) group – and 7 such groups – we will have 84 drives used for the ZFS pool, with 6 drives left as SPARE disks – that plays well for me. The disks distribution will look more or less like that. Here is how the FreeBSD system sees these drives via the camcontrol(8) command.
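The 7 x 12-disk raidz2 layout with 6 spares described above can be sketched as a small script that assembles the `zpool create` command. Note the pool name `tank` and the device names `da2` through `da91` (with `da0`/`da1` as the system SSDs) are assumptions for illustration, not the actual names from this build – and the script only prints the command instead of running it.

```shell
#!/bin/sh
# Sketch: build a zpool create command for 7 x 12-disk raidz2 groups
# plus 6 hot spares. Pool name "tank" and da2..da91 device names are
# hypothetical - adjust to the real camcontrol(8)/geom(8) device list.
POOL=tank
CMD="zpool create ${POOL}"
disk=2                        # first data disk (da0/da1 = system SSDs)
group=0
while [ ${group} -lt 7 ]; do  # 7 raidz2 groups of 12 disks each
  CMD="${CMD} raidz2"
  i=0
  while [ ${i} -lt 12 ]; do
    CMD="${CMD} da${disk}"
    disk=$((disk + 1))
    i=$((i + 1))
  done
  group=$((group + 1))
done
CMD="${CMD} spare"            # the remaining 6 disks become hot spares
while [ ${disk} -lt 92 ]; do
  CMD="${CMD} da${disk}"
  disk=$((disk + 1))
done
echo "${CMD}"                 # print the command instead of executing it
```

Printing first and piping to `sh` only after review is a cheap safety net when a typo could wipe the wrong disks.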
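For the capacity math behind the layout above – a quick back-of-the-envelope check using only the drive counts stated in the article (pure arithmetic, no other assumptions):

```shell
#!/bin/sh
# Capacity arithmetic for 90 x 12 TB drives arranged as
# 7 x 12-disk raidz2 groups plus 6 hot spares.
DISK_TB=12
RAW=$((90 * DISK_TB))                 # every drive in the chassis
POOL=$((84 * DISK_TB))                # the 84 drives inside the pool
USABLE=$((7 * (12 - 2) * DISK_TB))    # raidz2: 2 parity disks per group
echo "raw capacity:    ${RAW} TB"     # 1080 TB - more than 1 PB (1024 TB)
echo "pool raw:        ${POOL} TB"    # 1008 TB
echo "usable (approx): ${USABLE} TB"  # 840 TB, before ZFS metadata overhead
```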