Channel: LowEndTalk

Can someone explain this to me?


First of all, this isn't an important question. It's just something I noticed, and I'd love to learn about these kinds of things.

I have two servers, one at IPXcore (@Damian) and one at RamNode (@Nick_A). I can recommend them both!

Running this command:

dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync && rm -rf iotest

On RamNode it gives me this:

root@server:~# dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync && rm -rf iotest
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 3.99496 s, 269 MB/s

Whereas IPXcore gives me this:

root@mon:~# dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync && rm -rf iotest
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 10.4859 s, 102 MB/s

That seems obvious: RamNode is cached and IPXcore isn't.

Now let's take this command:

dd if=/dev/zero of=sb-io-test bs=64k count=16k oflag=dsync
From my understanding, the only change here is oflag=dsync instead of conv=fdatasync, so every 64k block gets synced to disk as it is written rather than once at the end. Assuming SSDs are much faster at writing, I was expecting RamNode to be faster. But it wasn't:
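For anyone comparing the two runs: as far as I can tell from the dd man page, the difference between the flags comes down to when the syncing happens. A minimal annotated sketch (iotest is just a scratch file):

```shell
# conv=fdatasync: all 16384 writes land in the page cache first,
# then dd issues a single fdatasync() before printing the timing.
dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync && rm -f iotest

# oflag=dsync: the output file is opened with O_DSYNC, so every
# individual 64k write() blocks until the data has reached the disk.
dd if=/dev/zero of=iotest bs=64k count=16k oflag=dsync && rm -f iotest
```

So the first run mostly measures buffered throughput plus one final flush, while the second measures 16384 round-trips to stable storage.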

Ramnode:

root@server:~# dd if=/dev/zero of=sb-io-test bs=64k count=16k oflag=dsync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 13.6672 s, 78.6 MB/s

Ipxcore:

root@mon:~# dd if=/dev/zero of=sb-io-test bs=64k count=16k oflag=dsync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 10.0707 s, 107 MB/s
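My back-of-the-envelope take on these numbers (not sure it's right): with oflag=dsync each 64k write is flushed on its own, so the totals translate into a per-write sync latency, and that latency dominates rather than raw throughput:

```shell
# Total time / number of synced writes = latency per 64k write.
awk 'BEGIN { printf "RamNode: %.3f ms per synced 64k write\n", 13.6672/16384*1000 }'  # ~0.834 ms
awk 'BEGIN { printf "IPXcore: %.3f ms per synced 64k write\n", 10.0707/16384*1000 }'  # ~0.615 ms
```

At sub-millisecond sync times, both are fast; the difference could come from anywhere in the sync path, not just the raw drive speed.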

Now for my question: how and why?

I'm really sorry if this is a noob question; I'm just wondering why.

