Translation Information

Project website: debian-handbook.info
Mailing list for translators: debian-handbook-translators@lists.alioth.debian.org
Instructions for translators: https://debian-handbook.info/contribute/
Project maintainers: rhertzog, pere
Translation process
  • Translations can be made directly.
  • Translation suggestions can be made.
  • Any authenticated user can contribute.
  • The translation uses bilingual files.
Translation license: GNU General Public License v2.0 or later
Source code repository: https://salsa.debian.org/hertzog/debian-handbook.git
Repository branch: buster/master
Last remote commit: "fr-FR: Translated using Weblate." (109bba20, authored by pere 12 days ago)
Weblate repository: https://hosted.weblate.org/git/debian-handbook/12_advanced-administration/
Filemask: */12_advanced-administration.po
Translation file: sv-SE/12_advanced-administration.po
New strings to translate (4 months ago)
Resource update (4 months ago)

Source string changed

Debian Handbook / 12_advanced-administration (Swedish)

<computeroutput># </computeroutput><userinput>xen-create-image --hostname testxen --dhcp --dir /srv/testxen --size=2G --dist=buster --role=udev</userinput>
<computeroutput>
[...]
General Information
--------------------
Hostname       :  testxen
Distribution   :  buster
Mirror         :  http://deb.debian.org/debian/
Partitions     :  swap            512M  (swap)
                  /               2G    (ext4)
Image type     :  sparse
Memory size    :  256M
Kernel path    :  /boot/vmlinuz-4.19.0-5-amd64
Initrd path    :  /boot/initrd.img-4.19.0-5-amd64
[...]
Logfile produced at:
  /var/log/xen-tools/testxen.log

Installation Summary
---------------------
Hostname        :  testxen
Distribution    :  buster
MAC Address     :  00:16:3E:0C:74:2F
IP Address(es)  :  dynamic
SSH Fingerprint :  SHA256:PuAGX4/4S07Xzh1u0Cl2tL04EL5udf9ajvvbufBrfvU (DSA)
SSH Fingerprint :  SHA256:ajFTX54eakzolyzmZku/ihq/BK6KYsz5MewJ98BM5co (ECDSA)
SSH Fingerprint :  SHA256:/sFov86b+rD/bRSJoHKbiMqzGFiwgZulEwpzsiw6aSc (ED25519)
SSH Fingerprint :  SHA256:/NJg/CcoVj+OLE/cL3yyJINStnla7YkHKe3/xEdVGqc (RSA)
Root Password   :  EwmQMHtywY9zsRBpqQuxZTb

</computeroutput>
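The summary above is also written to the log file named in the output (/var/log/xen-tools/testxen.log). As a small sketch, assuming that path and the "Label : Value" layout shown above, the generated root password can be recovered from the log later with awk:

```shell
# Extract the generated root password from the xen-tools log (sketch;
# assumes the "Root Password   :  value" layout seen in the summary above).
awk -F' : ' '/^Root Password/ { gsub(/^[ \t]+|[ \t]+$/, "", $2); print $2 }' \
    /var/log/xen-tools/testxen.log
```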
4 months ago
<computeroutput># </computeroutput><userinput>mdadm --misc --detail --brief /dev/md?</userinput>
<computeroutput>ARRAY /dev/md0 metadata=1.2 name=mirwiz:0 UUID=146e104f:66ccc06d:71c262d7:9af1fbc7
ARRAY /dev/md1 metadata=1.2 name=mirwiz:1 UUID=7d123734:9677b7d6:72194f7d:9050771c</computeroutput>
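These ARRAY lines are exactly what mdadm needs in /etc/mdadm/mdadm.conf to assemble the arrays at boot; a common pattern (a sketch, to be run as root) is to append the command's output to the file and sanity-check the result:

```shell
# Append the current ARRAY definitions to mdadm's config file (root required):
mdadm --misc --detail --brief /dev/md? >> /etc/mdadm/mdadm.conf
# Sanity-check: one ARRAY line per array should now be present.
grep -c '^ARRAY' /etc/mdadm/mdadm.conf
```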
4 months ago
# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
DEVICE /dev/sd*

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST &lt;system&gt;

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 name=mirwiz:0 UUID=146e104f:66ccc06d:71c262d7:9af1fbc7
ARRAY /dev/md1 metadata=1.2 name=mirwiz:1 UUID=7d123734:9677b7d6:72194f7d:9050771c

# This configuration was auto-generated on Tue, 25 Jun 2019 07:54:35 -0400 by mkconf
4 months ago
<computeroutput># </computeroutput><userinput>mdadm /dev/md1 --remove /dev/sde</userinput>
<computeroutput>mdadm: hot removed /dev/sde from /dev/md1
# </computeroutput><userinput>mdadm --detail /dev/md1</userinput>
<computeroutput>/dev/md1:
[...]
Number Major Minor RaidDevice State
2 8 96 0 active sync /dev/sdd2
1 8 80 1 active sync /dev/sdf</computeroutput>
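After the hot-remove, the failed disk can be swapped out. A sketch of the follow-up steps, assuming the disk is still visible as /dev/sde: confirm it is gone from the array, then wipe its stale RAID superblock before the disk is reused anywhere else.

```shell
# Confirm the device no longer appears in the array (expects a count of 0):
mdadm --detail /dev/md1 | grep -c '/dev/sde' || true
# Wipe the old RAID superblock before reusing the disk elsewhere (root):
mdadm --zero-superblock /dev/sde
```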
4 months ago
<computeroutput># </computeroutput><userinput>mdadm /dev/md1 --add /dev/sdf</userinput>
<computeroutput>mdadm: added /dev/sdf
# </computeroutput><userinput>mdadm --detail /dev/md1</userinput>
<computeroutput>/dev/md1:
[...]
Raid Devices : 2
Total Devices : 3
Persistence : Superblock is persistent

Update Time : Tue Jun 25 11:09:42 2019
State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 1
Spare Devices : 1

Consistency Policy : resync

Rebuild Status : 27% complete

Name : mirwiz:1 (local to host debian)
UUID : 7d123734:9677b7d6:72194f7d:9050771c
Events : 26

Number Major Minor RaidDevice State
2 8 96 0 spare rebuilding /dev/sdf
1 8 80 1 active sync /dev/sdd2

0 8 64 - faulty /dev/sde
# </computeroutput><userinput>[...]</userinput>
<computeroutput>[...]
# </computeroutput><userinput>mdadm --detail /dev/md1</userinput>
<computeroutput>/dev/md1:
[...]
Update Time : Tue Jun 25 11:10:47 2019
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 1
Spare Devices : 0

Consistency Policy : resync

Name : mirwiz:1 (local to host debian)
UUID : 7d123734:9677b7d6:72194f7d:9050771c
Events : 39

Number Major Minor RaidDevice State
2 8 96 0 active sync /dev/sdd2
1 8 80 1 active sync /dev/sdf

0 8 64 - faulty /dev/sde</computeroutput>
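The rebuild progress reported by mdadm --detail can also be followed live through /proc/mdstat; a small sketch (the second command is for scripts and assumes the kernel's usual "recovery = NN.N%" line):

```shell
# Follow the rebuild interactively (refresh every 5 seconds):
watch -n 5 cat /proc/mdstat
# Or extract just the recovery percentage for md1 in a script:
grep -A2 '^md1' /proc/mdstat | grep -o '[0-9.]*%'
```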
4 months ago
<computeroutput># </computeroutput><userinput>mdadm /dev/md1 --fail /dev/sde</userinput>
<computeroutput>mdadm: set /dev/sde faulty in /dev/md1
# </computeroutput><userinput>mdadm --detail /dev/md1</userinput>
<computeroutput>/dev/md1:
[...]
Update Time : Tue Jun 25 11:03:44 2019
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 1
Spare Devices : 0

Consistency Policy : resync

Name : mirwiz:1 (local to host debian)
UUID : 7d123734:9677b7d6:72194f7d:9050771c
Events : 20

Number Major Minor RaidDevice State
- 0 0 0 removed
1 8 80 1 active sync /dev/sdd2

0 8 64 - faulty /dev/sde</computeroutput>
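A degraded mirror is also visible at a glance in /proc/mdstat, where the member-status string loses a U; a quick scripted check (sketch, assuming a two-disk RAID-1):

```shell
# A healthy two-disk mirror shows [UU]; a degraded one shows [U_] or [_U]:
grep -o '\[U_\]\|\[_U\]' /proc/mdstat && echo "degraded md array found"
```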
4 months ago
<computeroutput># </computeroutput><userinput>mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd2 /dev/sde</userinput>
<computeroutput>mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: largest drive (/dev/sdd2) exceeds size (4192192K) by more than 1%
Continue creating array? </computeroutput><userinput>y</userinput>
<computeroutput>mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
# </computeroutput><userinput>mdadm --query /dev/md1</userinput>
<computeroutput>/dev/md1: 4.00GiB raid1 2 devices, 0 spares. Use mdadm --detail for more detail.
# </computeroutput><userinput>mdadm --detail /dev/md1</userinput>
<computeroutput>/dev/md1:
Version : 1.2
Creation Time : Tue Jun 25 10:21:22 2019
Raid Level : raid1
Array Size : 4189184 (4.00 GiB 4.29 GB)
Used Dev Size : 4189184 (4.00 GiB 4.29 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Update Time : Tue Jun 25 10:22:03 2019
State : clean, resyncing
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Consistency Policy : resync

Resync Status : 93% complete

Name : mirwiz:1 (local to host debian)
UUID : 7d123734:9677b7d6:72194f7d:9050771c
Events : 16

Number Major Minor RaidDevice State
0 8 64 0 active sync /dev/sdd2
1 8 80 1 active sync /dev/sde
# </computeroutput><userinput>mdadm --detail /dev/md1</userinput>
<computeroutput>/dev/md1:
[...]
State : clean
[...]
</computeroutput>
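The second mdadm --detail call above simply polls until the initial resync is over; mdadm can also block until that point with --wait. A sketch, with a script-friendly extraction of the State field from the --detail output:

```shell
# Block until any resync/recovery on md1 has finished (root required):
mdadm --wait /dev/md1
# Then confirm the array state in a script-friendly way:
mdadm --detail /dev/md1 | awk -F' : ' '/State :/ { print $2; exit }'
```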
4 months ago
<computeroutput># </computeroutput><userinput>mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc</userinput>
<computeroutput>mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
# </computeroutput><userinput>mdadm --query /dev/md0</userinput>
<computeroutput>/dev/md0: 8.00GiB raid0 2 devices, 0 spares. Use mdadm --detail for more detail.
# </computeroutput><userinput>mdadm --detail /dev/md0</userinput>
<computeroutput>/dev/md0:
Version : 1.2
Creation Time : Tue Jun 25 08:47:49 2019
Raid Level : raid0
Array Size : 8378368 (7.99 GiB 8.58 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Update Time : Tue Jun 25 08:47:49 2019
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Chunk Size : 512K

Consistency Policy : none

Name : mirwiz:0 (local to host debian)
UUID : 146e104f:66ccc06d:71c262d7:9af1fbc7
Events : 0

Number Major Minor RaidDevice State
0 8 32 0 active sync /dev/sdb
1 8 48 1 active sync /dev/sdc
# </computeroutput><userinput>mkfs.ext4 /dev/md0</userinput>
<computeroutput>mke2fs 1.44.5 (15-Dec-2018)
Discarding device blocks: done
Creating filesystem with 2094592 4k blocks and 524288 inodes
Filesystem UUID: 413c3dff-ab5e-44e7-ad34-cf1a029cfe98
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

# </computeroutput><userinput>mkdir /srv/raid-0</userinput>
<computeroutput># </computeroutput><userinput>mount /dev/md0 /srv/raid-0</userinput>
<computeroutput># </computeroutput><userinput>df -h /srv/raid-0</userinput>
<computeroutput>Filesystem Size Used Avail Use% Mounted on
/dev/md0 7.9G 36M 7.4G 1% /srv/raid-0
</computeroutput>
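To make the mount survive a reboot, an entry would normally be added to /etc/fstab; a sketch of that entry, reusing the filesystem UUID that mkfs.ext4 printed above (more robust than the /dev/md0 device name, which can change between boots):

```shell
# Append an fstab entry for the new filesystem (UUID from the mkfs output above):
echo 'UUID=413c3dff-ab5e-44e7-ad34-cf1a029cfe98 /srv/raid-0 ext4 defaults 0 2' \
    >> /etc/fstab
```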
4 months ago

Statistics

                 Percent   Strings   Words     Chars
Total                      513       19,814    146,962
Translated       19%       101       1,482     14,396
Needs editing    4%        21        1,264     15,146
Failing checks   8%        44        2,144     22,054

Last activity

Last change Aug. 19, 2020, 2:49 p.m.
Last author Luna Jernberg

Daily activity

Weekly activity