October 10th, 2016 07:00

Error : Max number of tokens exceeded for this user

I have been trying to integrate Cinder with CoprHD and have run into the problem mentioned in the title. Whenever I try to authenticate my user through viprcli, it returns the error below.

./viprcli authenticate -u root -d /tmp/cinder

Password :

The token is not generated by authentication service.Max number of tokens exceeded for this user

The same message appears in the CoprHD GUI. I am running viprcli on my Cinder host.

(Screenshot: CoprHD GUI showing the same "Max number of tokens exceeded" error.)

If someone could explain why this error is returned and how to rectify it, it would be greatly appreciated. Thank you in anticipation.

36 Posts

October 12th, 2016 08:00

Hi Mustafa,

Thank you for your question.

ViPR Controller has an internal limit of 100 tokens per user. If your automation code logs in on every execution but never logs out, there is a strong chance you will exceed 100 tokens for the account you are using.

Tokens take up to 8 hours to expire unless logged out. There is also an API to expire all tokens, but one has to be logged in as an administrator-level user to issue that call (or as the user himself, for his own tokens).

The solution I recommend is to script things so that you either log out at the end of execution (and in any exception/exit path), or alternatively have your script issue a basic 'whoami' API call to check whether it is already logged in before attempting to log in again.

I don't favor the second option, because the token in use might expire right in the middle of your automated execution. I recommend you make sure you log out, no matter what, at the end of your script.
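The log-out-no-matter-what pattern is just a try/finally around the work. A minimal sketch, assuming hypothetical `viper_login`/`viper_logout` helpers (placeholders, not real viprcli calls):

```python
# Sketch of the recommended pattern: always release the session token,
# even when the work in between raises. viper_login/viper_logout are
# hypothetical placeholders for the real authentication calls.

def viper_login(host, user, password):
    # Placeholder: would POST credentials to the authentication service
    # and return a session token.
    return f"fake-token-for-{user}"


def viper_logout(token):
    # Placeholder: would call the logout endpoint so the token no longer
    # counts against the per-user limit.
    print(f"released {token}")


def run_job(host, user, password):
    token = viper_login(host, user, password)
    try:
        # ... do the actual provisioning work with `token` ...
        return "done"
    finally:
        # Runs on success, exception, or early return, so tokens never
        # accumulate toward the 100-token ceiling.
        viper_logout(token)
```

The `finally` clause is what guarantees the logout happens even if the provisioning work in the middle throws.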

Stanislav

13 Posts

October 13th, 2016 00:00

Hello Mr. Stanislav,

Thank you so much for your response; I was quite stuck with this. The current situation is that my administrator account is locked out due to the above error. How can I rectify this issue?

I haven't used any automation scripts. The work I have done so far:

1. Deployed CoprHD.

2. Installed viprcli and added the ViPR drivers.

3. Added Keystone as an authentication provider (had to do it manually, as the CoprHD Keystone auth provider was not adding the endpoint with the correct region).

4. Restarted the Cinder driver.

When logs are viewed:

2016-10-13 06:45:36.869 130137 INFO cinder.volume.manager [req-dc5674b7-59e0-4e84-8074-bd005a756bf7 - - - - -] Starting volume driver EMCViPRFCDriver (N/A)

2016-10-13 06:45:36.881 130137 DEBUG oslo_db.sqlalchemy.session [req-dc5674b7-59e0-4e84-8074-bd005a756bf7 - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python2.7/dist-packages/oslo_db/sqlalchemy/session.py:513

2016-10-13 06:45:36.895 130137 DEBUG cinder.volume.manager [req-dc5674b7-59e0-4e84-8074-bd005a756bf7 - - - - -] Re-exporting 0 volumes init_host /usr/lib/python2.7/dist-packages/cinder/volume/manager.py:314

2016-10-13 06:45:36.901 130137 DEBUG cinder.volume.manager [req-dc5674b7-59e0-4e84-8074-bd005a756bf7 - - - - -] Resuming any in progress delete operations init_host /usr/lib/python2.7/dist-packages/cinder/volume/manager.py:375

2016-10-13 06:45:36.901 130137 INFO cinder.volume.manager [req-dc5674b7-59e0-4e84-8074-bd005a756bf7 - - - - -] Updating volume status

2016-10-13 06:45:36.901 130137 DEBUG cinder.volume.drivers.emc.vipr.fc [req-dc5674b7-59e0-4e84-8074-bd005a756bf7 - - - - -] Updating volume stats update_volume_stats /usr/lib/python2.7/dist-packages/cinder/volume/drivers/emc/vipr/fc.py:265

2016-10-13 06:45:36.901 130137 DEBUG cinder.volume.drivers.emc.vipr.common [req-dc5674b7-59e0-4e84-8074-bd005a756bf7 - - - - -] Updating volume stats update_volume_stats /usr/lib/python2.7/dist-packages/cinder/volume/drivers/emc/vipr/common.py:1508

2016-10-13 06:45:36.937 130137 ERROR cinder.openstack.common.threadgroup [req-b8846b05-2d4f-4479-a533-aebcc0b1dcfc - - - - -] Bad or unexpected response from the storage volume backend API:

ViPR Exception: The token is not generated by authentication service.Max number of tokens exceeded for this user

Stack Trace:

Traceback (most recent call last):

  File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/emc/vipr/common.py", line 136, in try_and_retry

    return func(*args, **kwargs)

  File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/emc/vipr/common.py", line 1509, in update_volume_stats

    self.authenticate_user()

  File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/emc/vipr/common.py", line 282, in authenticate_user

    cookie_path)

  File "/usr/lib/python2.7/dist-packages/bin/viprcli-3.0-py2.7.egg/viprcli/authentication.py", line 173, in authenticate_user

    "The token is not generated by authentication service."+details_str)

SOSError: 'The token is not generated by authentication service.Max number of tokens exceeded for this user'

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup Traceback (most recent call last):

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup  File "/usr/lib/python2.7/dist-packages/cinder/openstack/common/threadgroup.py", line 145, in wait

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup    x.wait()

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup  File "/usr/lib/python2.7/dist-packages/cinder/openstack/common/threadgroup.py", line 47, in wait

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup    return self.thread.wait()

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup  File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 175, in wait

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup    return self._exit_event.wait()

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup  File "/usr/lib/python2.7/dist-packages/eventlet/event.py", line 121, in wait

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup    return hubs.get_hub().switch()

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup  File "/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 294, in switch

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup    return self.greenlet.switch()

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup  File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 214, in main

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup    result = function(*args, **kwargs)

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup  File "/usr/lib/python2.7/dist-packages/cinder/openstack/common/service.py", line 488, in run_service

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup    service.start()

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup  File "/usr/lib/python2.7/dist-packages/cinder/service.py", line 144, in start

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup    self.manager.init_host()

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup  File "/usr/lib/python2.7/dist-packages/osprofiler/profiler.py", line 105, in wrapper

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup    return f(*args, **kwargs)

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup  File "/usr/lib/python2.7/dist-packages/osprofiler/profiler.py", line 105, in wrapper

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup    return f(*args, **kwargs)

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup  File "/usr/lib/python2.7/dist-packages/osprofiler/profiler.py", line 105, in wrapper

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup    return f(*args, **kwargs)

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup  File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 390, in init_host

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup    self.publish_service_capabilities(ctxt)

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup  File "/usr/lib/python2.7/dist-packages/osprofiler/profiler.py", line 105, in wrapper

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup    return f(*args, **kwargs)

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup  File "/usr/lib/python2.7/dist-packages/osprofiler/profiler.py", line 105, in wrapper

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup    return f(*args, **kwargs)

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup  File "/usr/lib/python2.7/dist-packages/osprofiler/profiler.py", line 105, in wrapper

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup    return f(*args, **kwargs)

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup  File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 1551, in publish_service_capabilities

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup    self._report_driver_status(context)

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup  File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 1507, in _report_driver_status

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup    volume_stats = self.driver.get_volume_stats(refresh=True)

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup  File "/usr/lib/python2.7/dist-packages/osprofiler/profiler.py", line 105, in wrapper

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup    return f(*args, **kwargs)

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup  File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/emc/vipr/fc.py", line 259, in get_volume_stats

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup    self.update_volume_stats()

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup  File "/usr/lib/python2.7/dist-packages/osprofiler/profiler.py", line 105, in wrapper

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup    return f(*args, **kwargs)

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup  File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/emc/vipr/fc.py", line 266, in update_volume_stats

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup    self._stats = self.common.update_volume_stats()

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup  File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/emc/vipr/common.py", line 149, in try_and_retry

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup    data=exception_message)

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API:

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup ViPR Exception: The token is not generated by authentication service.Max number of tokens exceeded for this user

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup Stack Trace:

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup Traceback (most recent call last):

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup  File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/emc/vipr/common.py", line 136, in try_and_retry

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup    return func(*args, **kwargs)

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup  File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/emc/vipr/common.py", line 1509, in update_volume_stats

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup    self.authenticate_user()

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup  File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/emc/vipr/common.py", line 282, in authenticate_user

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup    cookie_path)

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup  File "/usr/lib/python2.7/dist-packages/bin/viprcli-3.0-py2.7.egg/viprcli/authentication.py", line 173, in authenticate_user

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup    "The token is not generated by authentication service."+details_str)

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup SOSError: 'The token is not generated by authentication service.Max number of tokens exceeded for this user'

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup

2016-10-13 06:45:36.937 130137 TRACE cinder.openstack.common.threadgroup

2016-10-13 06:45:36.948 102782 INFO cinder.openstack.common.service [req-b8846b05-2d4f-4479-a533-aebcc0b1dcfc - - - - -] Child 130137 exited with status 0

2016-10-13 06:45:36.949 102782 INFO cinder.openstack.common.service [req-b8846b05-2d4f-4479-a533-aebcc0b1dcfc - - - - -] Forking too fast, sleeping

How do I move forward from here, as my CoprHD root account is locked out of the GUI as well? Any guidance/help would be greatly appreciated.

PS: My apologies for the long log.

Regards

Mustafa.

13 Posts

October 17th, 2016 02:00

Hello Mr. Stanislav,

I completely removed viprcli from my Cinder host, cleared the CoprHD DB, and brought it back to a fresh-install state. I re-installed viprcli, and as soon as I try to authenticate, the same error is repeated. I am really stuck with this and unable to move forward.

A side note: Cinder successfully lists volume type information:

root@controller2:/etc/init.d# cinder --insecure extra-specs-list
+---------------------------------------------------------------------+-------+-------------+
|                                  ID                                 |  Name | extra_specs |
+---------------------------------------------------------------------+-------+-------------+
| urn:storageos:VirtualPool:e152307a-0217-467d-be12-ad249a73fca5:vdc1 | vPool |      {}     |
+---------------------------------------------------------------------+-------+-------------+

36 Posts

October 17th, 2016 07:00

Are you trying to use ViPR as a storage provider for OpenStack, or are you trying to use OpenStack Cinder as a storage provider for a third-party array that ViPR doesn't support?

13 Posts

October 17th, 2016 21:00

Hello, thank you for your response. Basically, I'm trying to integrate EMC SAN storage with an OpenStack cloud, and for this purpose I'm using the CoprHD/ViPR controller.

To my understanding, any storage provisioned through the ViPR GUI catalog or through the ViPR CLI should be visible to Cinder, which can then provision it in OpenStack.
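For context, that integration is wired up through a backend stanza in `cinder.conf`. The sketch below is hedged: the option names follow the EMC ViPR Cinder driver documentation of that era as I recall it, and every value is a placeholder; verify both against your driver version.

```ini
# Hypothetical backend stanza for the EMC ViPR FC driver; option names
# should be checked against the driver docs, all values are placeholders.
[ViPR-FC]
volume_driver = cinder.volume.drivers.emc.vipr.fc.EMCViPRFCDriver
vipr_hostname = vipr.example.com
vipr_port = 4443
vipr_username = root
vipr_password = <password>
vipr_tenant = Provider Tenant
vipr_project = cinder-project
vipr_varray = varray1
```

Note that the driver authenticates with these credentials on its own schedule, which is why a token leak on the ViPR side surfaces as the error in the Cinder logs above.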

Mustafa.

36 Posts

October 18th, 2016 07:00

Yeah, that is using CoprHD to the south of OpenStack. I am not familiar with that aspect, sorry.

The best I can do is point you to this page: https://coprhd.atlassian.net/wiki/display/COP/Storage+Orchestration+For+OpenStack

13 Posts

October 18th, 2016 10:00

That's okay. I'm going to try to figure out a way to somehow refresh or release the tokens. Anyway, thank you for all the help; it is much appreciated! Cheers.
