
[Bug] Node status is OutOfDisk even though there is sufficient disk #28481

Closed
mdshuai opened this issue Jul 5, 2016 · 6 comments
Assignees
Labels
priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release.
Milestone

Comments

@mdshuai

mdshuai commented Jul 5, 2016

In my environment there is sufficient disk, but after running the Kubernetes all-in-one environment (hack/local-up-cluster.sh), the node status is always "OutOfDisk".

1. Check node OutOfDisk status
[root@ip-172-18-9-37 amd64]# ./kubectl describe node/127.0.0.1 | grep -w OutOfDisk
  OutOfDisk         True    Tue, 05 Jul 2016 01:40:44 -0400     Tue, 05 Jul 2016 01:37:57 -0400     KubeletOutOfDisk        out of disk space
[root@ip-172-18-9-37 amd64]# df -lh
Filesystem                                       Size  Used Avail Use% Mounted on
/dev/xvda2                                        25G   13G   13G  49% /
devtmpfs                                         3.9G     0  3.9G   0% /dev
tmpfs                                            3.7G     0  3.7G   0% /dev/shm
tmpfs                                            3.7G   17M  3.7G   1% /run
tmpfs                                            3.7G     0  3.7G   0% /sys/fs/cgroup
/dev/mapper/docker--vg-openshift--xfs--vol--dir 1014M   33M  982M   4% /mnt/openshift-xfs-vol-dir
tmpfs                                            757M     0  757M   0% /run/user/0
tmpfs                                            3.7G   16K  3.7G   1% /data/src/github.com/openshift/origin/openshift.local.volumes/pods/413b5f76-425a-11e6-9b3d-0e3794ecd4b1/volumes/kubernetes.io~secret/registry-token-jm1uv
tmpfs                                            3.7G   16K  3.7G   1% /data/src/github.com/openshift/origin/openshift.local.volumes/pods/4247b8cf-425a-11e6-9b3d-0e3794ecd4b1/volumes/kubernetes.io~secret/router-token-bfq42
[root@ip-172-18-9-37 amd64]# docker info
Containers: 1
 Running: 0
 Paused: 0
 Stopped: 1
Images: 4
Server Version: 1.10.3
Storage Driver: devicemapper
 Pool Name: docker-202:2-117755246-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: xfs
 Data file: /dev/docker-vg/docker-data
 Metadata file: /dev/docker-vg/docker-metadata
 Data Space Used: 1.325 GB
 Data Space Total: 19.32 GB
 Data Space Available: 18 GB
 Metadata Space Used: 1.401 MB
 Metadata Space Total: 1.074 GB
 Metadata Space Available: 1.072 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: true
 Deferred Deletion Enabled: true
 Deferred Deleted Device Count: 0
 Library Version: 1.02.107-RHEL7 (2016-06-09)
Execution Driver: native-0.2
Logging Driver: journald
Plugins: 
 Volume: local
 Network: null host bridge
 Authorization: rhel-push-plugin
Kernel Version: 3.10.0-229.7.2.el7.x86_64
Operating System: Red Hat Enterprise Linux Server 7.2 (Maipo)
OSType: linux
Architecture: x86_64
Number of Docker Hooks: 2
CPUs: 2
Total Memory: 7.389 GiB
Name: ip-172-18-9-37.ec2.internal
ID: XWWJ:VSCM:IPAG:I46X:EWGM:FUS4:6STD:BVWQ:XQVL:ZQ5M:76C3:ETXX
WARNING: bridge-nf-call-ip6tables is disabled
Registries: docker.io (secure)
[root@ip-172-18-9-37 amd64]# curl -k http://localhost:4194/api/v2.1/storage | python -m json.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   515  100   515    0     0   102k      0 --:--:-- --:--:-- --:--:--  125k
[
    {
        "available": 13834489856,
        "capacity": 26830942208,
        "device": "/dev/xvda2",
        "inodes_free": 25997876,
        "labels": [
            "root"
        ],
        "mountpoint": "/",
        "usage": 12996452352
    },
    {
        "available": 1029529600,
        "capacity": 1063256064,
        "device": "/dev/mapper/docker--vg-openshift--xfs--vol--dir",
        "inodes_free": 1048572,
        "labels": [],
        "mountpoint": "/mnt/openshift-xfs-vol-dir",
        "usage": 33726464
    },
    {
        "available": 17998282752,
        "capacity": 19323158528,
        "device": "docker-202:2-117755246-pool",
        "inodes_free": 0,
        "labels": [
            "docker-images"
        ],
        "mountpoint": "",
        "usage": 1324875776
    }
]
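The cAdvisor output above shows all three filesystems with ample free space, so no low-disk threshold should be tripped. As a rough illustration (not the kubelet's actual implementation; function names and the threshold constant here are hypothetical, though the kubelet of this era did expose a `--low-diskspace-threshold-mb` flag defaulting to 256), a check over this JSON might look like:

```python
import json

# Illustrative threshold, mirroring the kubelet's --low-diskspace-threshold-mb
# default of 256 MB (hypothetical sketch, not the kubelet's real code path).
LOW_DISK_THRESHOLD_MB = 256

def low_disk_filesystems(storage_json, threshold_mb=LOW_DISK_THRESHOLD_MB):
    """Return the devices whose available space is below the threshold."""
    filesystems = json.loads(storage_json)
    threshold_bytes = threshold_mb * 1024 * 1024
    return [fs["device"] for fs in filesystems
            if fs["available"] < threshold_bytes]

# Values taken from the cAdvisor /api/v2.1/storage output above.
sample = '''[
  {"device": "/dev/xvda2", "available": 13834489856},
  {"device": "/dev/mapper/docker--vg-openshift--xfs--vol--dir",
   "available": 1029529600},
  {"device": "docker-202:2-117755246-pool", "available": 17998282752}
]'''

print(low_disk_filesystems(sample))  # every filesystem has GBs free -> []
```

By this measure none of the reported filesystems are anywhere near a low-disk condition, which is why the OutOfDisk=True status is surprising.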

[root@ip-172-18-9-37 amd64]# ./kubectl version
Client Version: version.Info{Major:"1", Minor:"4+", GitVersion:"v1.4.0-alpha.0.1216+cd7a56ba4697ec", GitCommit:"cd7a56ba4697ecd49f8331cb6a2664dbfa4517d1", GitTreeState:"clean", BuildDate:"2016-07-05T01:36:43Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4+", GitVersion:"v1.4.0-alpha.0.1216+cd7a56ba4697ec", GitCommit:"cd7a56ba4697ecd49f8331cb6a2664dbfa4517d1", GitTreeState:"clean", BuildDate:"2016-07-05T01:36:43Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}

@mdshuai

mdshuai commented Jul 5, 2016

@ncdc @derekwaynecarr Could you help check this? Thanks so much.

@mdshuai

mdshuai commented Jul 5, 2016

On the same instance, using branch release-1.3, the node status works correctly: OutOfDisk=False.

[root@ip-172-18-9-37 amd64]# ./kubectl describe node/127.0.0.1 | grep -w OutOfDisk
  OutOfDisk         False   Tue, 05 Jul 2016 01:50:59 -0400     Tue, 05 Jul 2016 01:44:37 -0400     KubeletHasSufficientDisk    kubelet has sufficient disk space available
[root@ip-172-18-9-37 amd64]# curl -k http://localhost:4194/api/v2.1/storage | python -m json.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   454  100   454    0     0  91274      0 --:--:-- --:--:-- --:--:--  110k
[
    {
        "available": 13834055680,
        "capacity": 26830942208,
        "device": "/dev/xvda2",
        "labels": [
            "root"
        ],
        "mountpoint": "/",
        "usage": 12996886528
    },
    {
        "available": 1029529600,
        "capacity": 1063256064,
        "device": "/dev/mapper/docker--vg-openshift--xfs--vol--dir",
        "labels": [],
        "mountpoint": "/mnt/openshift-xfs-vol-dir",
        "usage": 33726464
    },
    {
        "available": 17998282752,
        "capacity": 19323158528,
        "device": "docker-202:2-117755246-pool",
        "labels": [
            "docker-images"
        ],
        "mountpoint": "",
        "usage": 1324875776
    }
]

@derekwaynecarr derekwaynecarr self-assigned this Jul 5, 2016
@derekwaynecarr
Member

Hmm, will see if there was a recent regression.

@derekwaynecarr
Member

I suspect this is related: #28176

@derekwaynecarr
Member

/cc @ronnielai @vishh

@derekwaynecarr derekwaynecarr added this to the v1.4 milestone Jul 6, 2016
@derekwaynecarr derekwaynecarr added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Jul 6, 2016
@derekwaynecarr
Member

devicemapper will not return inodes free since it's block level.

{
    "available": 17998282752,
    "capacity": 19323158528,
    "device": "docker-202:2-117755246-pool",
    "inodes_free": 0,
    "labels": [
        "docker-images"
    ],
    "mountpoint": "",
    "usage": 1324875776
}

We need to be able to differentiate an unknown inodes-free value from an actual value of 0.
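A sketch of that distinction (illustrative only; the function name is hypothetical and this is not the kubelet's actual code): model inodes_free as an optional value, so that "the filesystem did not report inode counts" (as with a block-level devicemapper pool) is distinct from a genuine count of zero.

```python
from typing import Optional

def inodes_exhausted(inodes_free: Optional[int]) -> bool:
    """Only treat the filesystem as out of inodes when a real count of 0
    was reported; an unknown value (None) must not trigger OutOfDisk."""
    return inodes_free is not None and inodes_free == 0

print(inodes_exhausted(None))      # devicemapper pool: unknown -> False
print(inodes_exhausted(0))         # truly exhausted -> True
print(inodes_exhausted(25997876))  # healthy root filesystem -> False
```

With a plain integer field, the devicemapper pool's unreported count serializes as 0 and is indistinguishable from a real out-of-inodes condition, which would explain the spurious OutOfDisk status.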

/cc @kubernetes/rh-cluster-infra
