Proceedings of the 14th USENIX Conference on File and Storage Technologies (FAST '16), February 22–25, 2016, Santa Clara, CA. USENIX Association.

Contents:
- Optimizing Every Operation in a Write-Optimized File System
- The Composite-File File System: Decoupling the One-to-One Mapping of Files and Metadata for Better Performance
- Isotope: Transactional Isolation for Block Storage
- BTrDB: Optimizing Storage System Design for Timeseries Processing
- Environmental Conditions and Disk Reliability in Free-cooled Datacenters
- Flash Reliability in Production: The Expected and the Unexpected
- Opening the Chrysalis: On the Real Repair Performance of MSR Codes
- The Devil Is in the Details: Implementing Flash Page Reuse with WOM Codes
- Reducing Solid-State Storage Device Write Stress through Opportunistic In-place Delta Compression
- Access Characteristic Guided Read and Write Cost Regulation for Performance Improvement on Flash Memory
- WiscKey: Separating Keys from Values in SSD-Conscious Storage
- Towards Accurate and Fast Evaluation of Multi-Stage Log-Structured Designs
- Efficient and Available In-Memory KV-Store with Hybrid Erasure Coding and Replication
- Slacker: Fast Distribution with Lazy Docker Containers
- sRoute: Treating the Storage Stack Like a Network
- Flamingo: Enabling Evolvable HDD-Based Near-Line Storage
- PCAP: Performance-Aware Power Capping for the Disk Drive in the Cloud
- Mitigating Sync Amplification for Copy-on-Write Virtual Disk
- Uncovering Bugs in Distributed Storage Systems during Testing (Not in Production!)
- The Tail at Store: A Revelation from Millions of Hours of Disk and SSD Deployments
- Estimating Unseen Deduplication – from Theory to Practice
- OrderMergeDedup: Efficient, Failure-Consistent Deduplication on Flash
- CacheDedup: In-Line Deduplication for Flash Caching
- Using Hints to Improve Inline Block-Layer Deduplication
- NOVA: A Log-Structured File System for Hybrid Volatile/Non-Volatile Main Memories
- Application-Managed Flash
- CloudCache: On-Demand Flash Cache Management for Cloud Computing