[mythtvnz] Recording dropouts and disk performance...

Rob Connolly rob at webworxshop.com
Sat Feb 16 21:19:14 GMT 2013


On Sun, Feb 17, 2013 at 01:17:52AM +1300, Steve Hodge wrote:
>    On Sat, Feb 16, 2013 at 10:16 PM, criggie <criggie at criggie.org.nz>
>    wrote:
> 
>    On 16/02/13 21:48, Steve Hodge wrote:
> 
>    On Sat, Feb 16, 2013 at 8:36 PM, Robin Gilks <g8ecj at gilks.org
> 
>        I do find it odd that a RAID 1 system is so slow - my RAID 5
>        system, which in theory is 1/4 the speed (at best!) handles 8
>        simultaneous recordings OK with a mixture of DVB-T, DVB-S and
>        analog off a set-top box.
>    RAID1 and RAID5 have theoretically identical write performance - the
>    same as the speed of the slowest disk. In practice it'll be slightly
>    slower just because there are more disks involved.
> 
>      Agreed - RAID5 should be N times faster than one disk where N is
>      number of disks in the raid.   A file is written in N blocks, with
>      1/N per disk (approximately) so it should be done in 1/N the time of
>      a single disk writing the lot.
> 
>    Right, at least in the general case. With MythTV, the files being
>    written are being streamed from a relatively slow source (the DVB
>    device). So what will happen is that the system will end up writing to
>    two drives in the RAID 5 array at a time (the drive that has the block
>    being written to and the parity drive for that stripe). Depending on
>    the implementation it may have to read from one of these drives or all
>    of the drives in the array first. The reads are generally pretty cheap
>    as plenty of look-ahead can be used, so the performance ends up being
>    pretty similar to writing to a single drive. If you have a file that
>    is buffered then performance can be higher as more drives can be
>    written to at once (since we have available data for multiple blocks).
>    With RAID1 both drives are written simultaneously so the speed is
>    identical to writing to a single drive.
> 
>      In practice this theoretical stuff goes out the window, because it
>      takes time for either software or hardware to calculate the parity
>      bits
> 
>    Note that the parity calculation itself is so trivial that you might as
>    well ignore it on modern hardware. The cost is reading in the other
>    blocks in the stripe, if you don't already have them buffered.
> 
>      Personally I'd suspect LVM, and I must ask, why are you using LVM at
>      all?
> 
>      LVM is awesome for the OS, or your files, or almost anything.
> 
>    There are cases where LVM is useful, but personally I think they're
>    quite rare. Back when drives were relatively small it was not uncommon
>    for large unix systems to have multiple partitions (or even drives)
>    just for the OS (e.g. /var was often a separate partition). It was very
>    common to have home directories separate. LVM is great for that sort of
>    set up. But these days that is almost never necessary - space is cheap
>    and plentiful and baroque partition layouts are just needless
>    complexity for no gain. LVM introduces a significant performance
>    penalty and you don't need it to add space to partitions if you are
>    using software RAID (at least if you pick the right RAID level).
>    LVM can be useful to combine arrays, but unless you are adding space
>    frequently it's rarely worth the effort. E.g. I have a number of
>    160-320GB drives sitting around that I might be able to combine into a
>    600GB array with some redundancy. But my main array is 4.5TB over 4
>    1.5TB drives. It's not worth the power, heat, or space to put 3 or 4
>    extra drives into that system just for 600GB. Better to buy one more
>    1.5TB drive.
> 
>      Except recording storage in mythtv.
> 
>    Exactly. If you want to use multiple drives for recordings and a single
>    array won't do then storage groups are a better way to go.
>    Cheers,
>    Steve
> 
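Just to put rough numbers on the striping argument above - a toy sketch, with made-up speeds (the real numbers depend on stripe size, caching, parity reads, etc.):

```python
# Back-of-envelope: time to flush a fully buffered file, assuming the
# only cost is raw sequential write speed (illustrative numbers only).
disk_write_mb_s = 100          # assumed sustained write speed of one disk
file_mb = 1000                 # size of a buffered file to flush

def write_time(n_data_disks):
    # Striped over n data disks, each disk writes ~1/n of the file.
    return file_mb / (disk_write_mb_s * n_data_disks)

single = write_time(1)         # one plain disk
raid5 = write_time(3)          # 4-disk RAID 5: 3 data disks per stripe
raid1 = write_time(1)          # RAID 1 mirrors, so same as a single disk

print(f"single: {single:.1f}s, 4-disk RAID5: {raid5:.1f}s, RAID1: {raid1:.1f}s")
```

Of course, as Steve says, a slow streaming source like a DVB card never has a full stripe buffered, which is why the theoretical speed-up doesn't show up for recordings.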

So the general consensus is that LVM == bad in this situation. The RAID
setup is also softraid through mdadm if that makes a difference. I think
the motherboard RAID controller only does fakeraid anyway.
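For anyone else checking an mdadm setup, the kernel exposes the array state in /proc/mdstat; a minimal sketch for reading it from a script (Linux-only, and it prints nothing on systems without md arrays):

```python
# Read the software-RAID status the md driver exposes (Linux-only).
def mdstat():
    try:
        with open("/proc/mdstat") as f:
            return f.read()
    except OSError:
        return None  # no md driver loaded, or not Linux

status = mdstat()
if status is not None:
    # A degraded array shows e.g. [U_] instead of [UU] in this output.
    print(status)
```

`mdadm --detail /dev/md0` gives the same information in more detail (device name is whatever your array is called).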

I had been using LVM to separate partitions for different purposes, e.g.
root, home, recordings, other media, etc. and to allow expansion of
space for each as necessary. Now that I think about this, it's kinda
pointless as creating one partition on the whole disk would achieve the
same thing!

I do have to rebuild the system in the relatively near future, both
because I want to get off Arch Linux and because I intend to buy an SSD
for the system/database partition. I can eliminate the LVM then.

Of course, this doesn't tell me why this is just happening now, when the
system has been running fine for over a year in this configuration.
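Out of interest, here's a rough sketch I can use to sanity-check sustained write speed before and after the rebuild (the path and sizes are just placeholders - point it at the actual recording directory):

```python
# Rough sustained-write benchmark: can the recording filesystem keep up
# with the combined bitrate of simultaneous recordings?
import os
import tempfile
import time

def sustained_write_mb_s(path, total_mb=64, chunk_mb=1):
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # include the flush-to-disk cost
    elapsed = time.monotonic() - start
    os.unlink(path)
    return total_mb / elapsed

target = os.path.join(tempfile.gettempdir(), "mythtv-disk-test.bin")
rate = sustained_write_mb_s(target)
# A single DVB-T recording is roughly 1-3 MB/s, so 8 simultaneous
# recordings want maybe ~20 MB/s of steady headroom.
print(f"sustained write: {rate:.0f} MB/s")
```

If the number has dropped a lot since the system was healthy, a failing drive or a degraded array would be my first suspects.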

Cheers,

Rob




More information about the mythtvnz mailing list