## Windows
– [x] Windows 10 install on moneyboxwin
– [x] Office 365, TreeSizeFree, KarenReplicator
– [x] Copy 5GB backup drive
## Disk drive
– […] Initialize 48GB RAID
– [ ] Copy recordings to 48GB
– [ ] Copy personal files to 48GB
– [ ] Build 60 GB Linux partition, put in recordings
– [x] Build 48 GB backup drive (RAID5), put in all recordings
action: Walked for an hour to get to work; will do the same on the way back, so that's two hours of walking per day. I will also save $20 a day by not taking the ferry. Per day: lose 1 kg, save $20, and spend an extra hour commuting. In a month that's 30 kg lighter and $600 richer.
Start: 10:30 AM
Goals: memory loop plot, data backup, v4.9.5 debug with Bapun
# Memory test status
Still going. param_set2.prm includes cached results, so I need to account for file read time, which should be about 1 GB/s.
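To sanity-check the ~1 GB/s read assumption before correcting the timings, a quick sketch using GNU `dd` (the file path in the comment is hypothetical; point it at one of the actual cached result files):

```shell
# Measure sequential read throughput so cached-result timings can be corrected.
read_speed() {
  # iflag=nocache asks GNU dd to bypass/drop the page cache for this read,
  # so we time the disk, not RAM. The last line of dd's report is the rate.
  dd if="$1" of=/dev/null bs=1M iflag=nocache 2>&1 | tail -n 1
}
# e.g. read_speed /path/to/param_set2_results.bin   (hypothetical path)
```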
# Bapun v4.9.5 vs v4.9.11 comparison
No obvious difference. A formatting issue is suspected. Run his dataset tomorrow using his own parameters; also run it using the makeprm command.
# Dan English Dataset library
Not downloaded yet. I should do this after making the parforeval command.
# Disk backup
The new 4-bay disk enclosure is set up. Each enclosure can hold 40TB (RAID0). My personal data will be on RAID5 (30TB); the recordings will be on RAID0 and also stored in CEPH. Linux gets 20TB of scratch (RAID0) managed by LVM. Eventually I will invest in 14TB x 4 drives, which gives 12TB of extra usable space with RAID5, at a $1600 price tag. I can only afford one enclosure (30TB) at this point, so I will set up a 10TB scratch drive in Linux and 10TB in Windows. I will keep a copy of the data at home to use with my Lenovo.
## Final goal
– 40TB Hitachi RAID5, keep at home (enclosure ordered, fill with personal data)
– 40TB WD RAID0, keep at work (filled with recordings, hardware RAID)
– 40TB WD RAID0, Windows enclosure (temporary data holding)
– 20TB Hitachi, keep in Moneybox (10TB for Windows, 10TB for Ubuntu)
## Data migration plan
– Day 0: empty the 80TB WD RAID5 tower onto the 40TB Hitachi RAID0 (recordings) and the 20TB Hitachi drives (personal)
– Day 1: copy the Hitachi RAID0 (recordings) to CEPH via Globus (over the weekend, ~30TB)
– Day 2: bring the 4-bay enclosure from home (Monday), add the 40TB WD, copy from the 40TB Hitachi RAID0 overnight
– Day 2: change the 40TB Hitachi to RAID5 and copy personal data (20TB Hitachi) overnight
– Day 3: take the 40TB Hitachi RAID5 home, borrow the 40TB WD to set up the Linux RAID
# Dell Precision Linux setup (T7910)
– Dylan suggested RAID5 with XFS on the CentOS workstation, with the RAID managed by LVM (logical volume manager).
– The plan is to generate the dataset and run the benchmark on this workstation. When my own Linux box is cleared, I can clone the setup.
– Dylan disabled the last-access-time field, a performance tweak.
– Currently generating the test dataset. The RAID is still being built, but writing to disk is already possible. He recommended software RAID.
– Why not use CEPH? Read/write variability on the workstation; that is why I want a local RAID. I am dealing with large file reads and writes and want to keep all the processing local.
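A minimal sketch of the LVM-managed RAID5 + XFS setup described above, assuming four empty drives at hypothetical device names (/dev/sdb–/dev/sde, volume and mount names invented for illustration; these commands are destructive and need root, so this is a sketch, not a verbatim record of Dylan's setup):

```shell
# Hypothetical device names; run only against the empty drives.
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
vgcreate vg_scratch /dev/sdb /dev/sdc /dev/sdd /dev/sde
# LVM-managed software RAID5 across the four PVs (3 data stripes + parity).
lvcreate --type raid5 -i 3 -l 100%FREE -n lv_data vg_scratch
mkfs.xfs /dev/vg_scratch/lv_data
# noatime skips the last-access-time update on every read,
# matching the performance tweak Dylan applied.
mount -o noatime /dev/vg_scratch/lv_data /mnt/data
```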
# Linux Thunderbolt install
`sudo apt install bolt`, then restart and mount. The Dell workstation has a Thunderbolt port available.
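Once the bolt daemon is installed, its `boltctl` CLI handles device authorization; a sketch (needs actual Thunderbolt hardware attached, and the UUID placeholder must come from the list output):

```shell
# List attached Thunderbolt devices and their authorization status.
boltctl list
# Permanently authorize a device by the UUID shown in the list output.
boltctl enroll <device-uuid>
```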
# Memory profiling caveat
In Ubuntu, memory profiling precision is no better than about 20MB (note: MB is 10^6 bytes, whereas MiB is 2^20 bytes).
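The MB/MiB distinction in bytes, spelled out with bash arithmetic (about a 5% difference at this scale):

```shell
# Decimal megabytes vs binary mebibytes, in bytes (bash ** operator).
mb=$((20 * 10**6))    # 20 MB  = 20,000,000 bytes
mib=$((20 * 2**20))   # 20 MiB = 20,971,520 bytes
echo "$mb $mib"
```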