[FUG-BR] Raid 5 on FreeBSD

Eduardo Schoedler listas at esds.com.br
Mon Oct 17 15:49:38 BRST 2011


I think the discussion is drifting a bit off topic...
How about starting another thread?


On October 17, 2011 at 15:46, Otavio Augusto <otavioti at gmail.com> wrote:

> If you want a BSD using Xen as dom0, try NetBSD.
>
>
> On October 17, 2011 at 15:39, Marcelo Gondim <gondim at bsdinfo.com.br> wrote:
> > On 17/10/2011 11:55, Marcelo Gondim wrote:
> >> On 17/10/2011 10:31, Luiz Gustavo Costa wrote:
> >>> Good morning,
> >>>
> >>> João, that guy's post is completely wrong: there is no port of KVM (the one
> >>> used on Linux) to FreeBSD. He describes a Qemu install with the Kqemu kernel
> >>> acceleration module, which has nothing to do with KVM.
> >>>
> >>> VirtualBox is waaaay faster than qemu+kqemu
> >> Luiz, on FreeBSD's VirtualBox there's no way to install this one, right?
> >> Oracle_VM_VirtualBox_Extension_Pack-4.1.4-74291.vbox-extpack
> >>
> >> I tried to install it but it gave this error:
> >>
> >> (root@zeus)[/storage/data]# VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-4.1.4-74291.vbox-extpack
> >> 0%...
> >> Progress state: NS_ERROR_FAILURE
> >> VBoxManage: error: Failed to install
> >> "/storage/data/Oracle_VM_VirtualBox_Extension_Pack-4.1.4-74291.vbox-extpack":
> >> Failed to locate the main module ('VBoxPuelMain')
> >
> > I just saw that it's not supported on FreeBSD.   :D  Everything else is
> > working fine here. I'm even installing a Windows XP VM here as a test.
> > Thanks for the help, folks.
> >
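For anyone chasing the same 'VBoxPuelMain' error, a quick way to see which extension packs the host VirtualBox actually registers is the list subcommand; a minimal sketch (prompt mirrors the one above, output depends on the install):

  (root@zeus)[~]# VBoxManage list extpacks
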
> >>
> >>> via /usr/ports/UPDATING:
> >>>
> >>> 20091206:
> >>>      AFFECTS: users of emulators/qemu
> >>>      AUTHOR: nox at FreeBSD.org
> >>>
> >>>      The port has been updated to 0.11.1, which no longer enables kqemu by
> >>>      default (if built with KQEMU knob on), now you also have to explicitly
> >>>      pass -enable-kqemu (or -kernel-kqemu as with the previous versions)
> >>>      if you want to use it.  Also note the 0.11 stable branch is the last
> >>>      qemu branch that still supports kqemu, so if you depend on reasonably
> >>>      fast emulation on FreeBSD you should start looking for alternatives
> >>>      some time soon.  (VirtualBox?)
> >>>
> >>> KVM = http://wiki.qemu.org/KVM
> >>> KQEMU = http://wiki.qemu.org/KQEMU
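To make the UPDATING note above concrete: with the 0.11 port built with the KQEMU knob, acceleration has to be requested explicitly on the qemu command line. A minimal sketch, assuming the kqemu kernel module from ports is installed (the memory size and guest.img are just placeholders):

  # kldload kqemu
  # qemu -m 512 -hda guest.img -enable-kqemu
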
> >>>
> >>>
> >>> On Mon, 17 Oct 2011 09:17:41 -0200, João Mancy <joaocep at gmail.com> wrote:
> >>>
> >>>> good morning,
> >>>>
> >>>> If I'm not mistaken there is a port of KVM
> >>>>
> >>>> http://www.linux-kvm.org/page/BSD
> >>>>
> >>>> http://www.sufixo.com/raw/2009/06/08/kvm-on-freebsd-72/
> >>>>
> >>>> Just a tip for you, and cheers.
> >>>>
> >>>>
> >>>>
> >>>> On October 16, 2011 at 11:49, Marcelo Gondim <gondim at bsdinfo.com.br> wrote:
> >>>>
> >>>>> On 16/10/2011 01:38, Josias L.G wrote:
> >>>>>> http://wiki.freebsd.org/BHyVe
> >>>>>>
> >>>>>> Soon something better than Xen.
> >>>>> Cool!!! I didn't know about that project. I'll keep an eye on it, because
> >>>>> it's an area where FreeBSD hasn't stood out yet. What I see most are
> >>>>> Debian servers using Xen as dom0, with other systems then running as domU
> >>>>> with excellent performance.
> >>>>>
> >>>>> If BHyVe turns out better, new horizons will certainly open up.  :)
> >>>>>> Regards.
> >>>>>>
> >>>>>> On 15/10/2011, at 23:41, Marcelo Gondim wrote:
> >>>>>>
> >>>>>>> On 15/10/2011 23:03, Thiago Damas wrote:
> >>>>>>>>      What kind of application will use these disks? Have you thought
> >>>>>>>> about doing a RAID 10 with zfs?
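For comparison with the raidz pool that shows up further down in the thread, the RAID 10 Thiago suggests would be a ZFS pool of striped mirrors over the same four disks; a minimal sketch (the pool name tank is just an example):

  # zpool create tank mirror ad12 ad14 mirror ad16 ad18
  # zpool status tank
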
> >>>>>>> Hi Thiago,
> >>>>>>>
> >>>>>>> At the moment it's for hosting a few applications, but I've thought about
> >>>>>>> building a VM server. For that, though, it seems the best we have would
> >>>>>>> be VirtualBox, by the looks of it.
> >>>>>>> Too bad we still don't have great options for that task, or am I mistaken?
> >>>>>>> I've been looking into Xen on FreeBSD and apparently it only works as
> >>>>>>> domU, not as dom0.
> >>>>>>>
> >>>>>>>> Thiago
> >>>>>>>>
> >>>>>>>> On October 15, 2011 at 21:32, Marcelo Gondim <gondim at bsdinfo.com.br> wrote:
> >>>>>>>>> On 15/10/2011 16:54, Luiz Gustavo Costa wrote:
> >>>>>>>>>> Hey there Marcelo !!!
> >>>>>>>>>>
> >>>>>>>>>> Man, vinum... I used it a lot on the FreeBSD 4.x family, it was really
> >>>>>>>>>> good! But when we migrated to 5.x it wasn't ported; gvinum was
> >>>>>>>>>> eventually created, but until some time ago it wasn't stable (I say
> >>>>>>>>>> that, but I don't know what state it's in today).
> >>>>>>>>>>
> >>>>>>>>>> I would do the raid5 (raidZ) in zfs, but there is another option in
> >>>>>>>>>> ports called graid5; honestly I've never used it, but it can be tested:
> >>>>>>>>> There, now on raidz  ;)
> >>>>>>>>>
> >>>>>>>>> (root@zeus)[~]# zpool status storage
> >>>>>>>>>      pool: storage
> >>>>>>>>>     state: ONLINE
> >>>>>>>>>     scan: none requested
> >>>>>>>>> config:
> >>>>>>>>>
> >>>>>>>>>            NAME        STATE     READ WRITE CKSUM
> >>>>>>>>>            storage     ONLINE       0     0     0
> >>>>>>>>>              raidz1-0  ONLINE       0     0     0
> >>>>>>>>>                ad12    ONLINE       0     0     0
> >>>>>>>>>                ad14    ONLINE       0     0     0
> >>>>>>>>>                ad16    ONLINE       0     0     0
> >>>>>>>>>                ad18    ONLINE       0     0     0
> >>>>>>>>>
> >>>>>>>>> errors: No known data errors
> >>>>>>>>>
> >>>>>>>>> Sometimes it's so easy to do things on FreeBSD that we even doubt it
> >>>>>>>>> really works. Hahahahaha
> >>>>>>>>> Really good indeed!!!
> >>>>>>>>>
> >>>>>>>>>
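For reference, a pool laid out like the status output above is created in a single step; a minimal sketch assuming the same four bare disks:

  # zpool create storage raidz ad12 ad14 ad16 ad18
  # zpool status storage
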
> >>>>>>>>>> [root@desktop] /usr/ports/sysutils/graid5# cat pkg-descr
> >>>>>>>>>> FreeBSD GEOM class for RAID5.
> >>>>>>>>>>
> >>>>>>>>>> This is RAID5 geom class, originally written by Arne Worner
> >>>>>>>>>> <arne_woerner at yahoo.com>
> >>>>>>>>>>
> >>>>>>>>>> WWW: http://lev.serebryakov.spb.ru/download/graid5/
> >>>>>>>>>>
> >>>>>>>>>> Regards
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> On Sat, 15 Oct 2011 14:50:45 -0300, Marcelo Gondim <gondim at bsdinfo.com.br> wrote:
> >>>>>>>>>>
> >>>>>>>>>>> Hi folks,
> >>>>>>>>>>>
> >>>>>>>>>>> I've got 4 SATA II disks here in a machine and decided to make a
> >>>>>>>>>>> raid 5 with them. I used gvinum:
> >>>>>>>>>>>
> >>>>>>>>>>> drive r0 device /dev/ad12a
> >>>>>>>>>>> drive r1 device /dev/ad14a
> >>>>>>>>>>> drive r2 device /dev/ad16a
> >>>>>>>>>>> drive r3 device /dev/ad18a
> >>>>>>>>>>> volume raid5
> >>>>>>>>>>>          plex org raid5 512k
> >>>>>>>>>>>          sd drive r0
> >>>>>>>>>>>          sd drive r1
> >>>>>>>>>>>          sd drive r2
> >>>>>>>>>>>          sd drive r3
> >>>>>>>>>>>
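For reference, a config like the one above is normally loaded with gvinum's create verb, after which the volume appears under /dev/gvinum and can be newfs'ed; a minimal sketch (the file name raid5.conf and the /storage mount point are just examples):

  # gvinum create raid5.conf
  # newfs /dev/gvinum/raid5
  # mount /dev/gvinum/raid5 /storage
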
> >>>>>>>>>>> It looks like everything went 100%. Is gvinum really what is used to
> >>>>>>>>>>> do raid 5, or is there some better way on FreeBSD?
> >>>>>>>>>>> Another thing that shows up at boot is this message:
> >>>>>>>>>>>
> >>>>>>>>>>> Oct 15 10:36:12 zeus kernel: GEOM_VINUM: raid5 plex request failed.
> >>>>>>>>>>> gvinum/raid5[READ(offset=1500321938944, length=512)]
> >>>>>>>>>>>
> >>>>>>>>>>> But everything seems to be working:
> >>>>>>>>>>>
> >>>>>>>>>>> gvinum ->       printconfig
> >>>>>>>>>>> # Vinum configuration of zeus.linuxinfo.com.br, saved at Sat Oct 15 14:48:59 2011
> >>>>>>>>>>> drive r0 device /dev/ad12a
> >>>>>>>>>>> drive r1 device /dev/ad14a
> >>>>>>>>>>> drive r2 device /dev/ad16a
> >>>>>>>>>>> drive r3 device /dev/ad18a
> >>>>>>>>>>> volume raid5
> >>>>>>>>>>> plex name raid5.p0 org raid5 1024s vol raid5
> >>>>>>>>>>> sd name raid5.p0.s0 drive r0 len 976772096s driveoffset 265s plex raid5.p0 plexoffset 0s
> >>>>>>>>>>> sd name raid5.p0.s1 drive r1 len 976772096s driveoffset 265s plex raid5.p0 plexoffset 1024s
> >>>>>>>>>>> sd name raid5.p0.s2 drive r2 len 976772096s driveoffset 265s plex raid5.p0 plexoffset 2048s
> >>>>>>>>>>> sd name raid5.p0.s3 drive r3 len 976772096s driveoffset 265s plex raid5.p0 plexoffset 3072s
> >>>>>>>>>>>
> >>>>>>>>>>> gvinum ->       l
> >>>>>>>>>>> 4 drives:
> >>>>>>>>>>> D r0                    State: up       /dev/ad12a      A: 0/476939 MB (0%)
> >>>>>>>>>>> D r1                    State: up       /dev/ad14a      A: 0/476939 MB (0%)
> >>>>>>>>>>> D r2                    State: up       /dev/ad16a      A: 0/476939 MB (0%)
> >>>>>>>>>>> D r3                    State: up       /dev/ad18a      A: 0/476939 MB (0%)
> >>>>>>>>>>>
> >>>>>>>>>>> 1 volume:
> >>>>>>>>>>> V raid5                 State: up       Plexes:       1 Size:       1397 GB
> >>>>>>>>>>>
> >>>>>>>>>>> 1 plex:
> >>>>>>>>>>> P raid5.p0           R5 State: up       Subdisks:     4 Size:       1397 GB
> >>>>>>>>>>>
> >>>>>>>>>>> 4 subdisks:
> >>>>>>>>>>> S raid5.p0.s0           State: up       D: r0           Size:        465 GB
> >>>>>>>>>>> S raid5.p0.s1           State: up       D: r1           Size:        465 GB
> >>>>>>>>>>> S raid5.p0.s2           State: up       D: r2           Size:        465 GB
> >>>>>>>>>>> S raid5.p0.s3           State: up       D: r3           Size:        465 GB

