Dataset schema (column · dtype · min · max):

ast_errors      string   lengths 0 – 3.2k
d_id            int64    44 – 121k
id              int64    70 – 338k
n_whitespaces   int64    3 – 14k
path            string   lengths 8 – 134
n_words         int64    4 – 4.82k
n_identifiers   int64    1 – 131
random_cut      string   lengths 16 – 15.8k
commit_message  string   lengths 2 – 15.3k
fun_name        string   lengths 1 – 84
commit_id       string   lengths 40 – 40
repo            string   lengths 3 – 28
file_name       string   lengths 5 – 79
ast_levels      int64    6 – 31
nloc            int64    1 – 548
url             string   lengths 31 – 59
complexity      int64    1 – 66
token_counts    int64    6 – 2.13k
n_ast_errors    int64    0 – 28
vocab_size      int64    4 – 1.11k
n_ast_nodes     int64    15 – 19.2k
language        stringclasses  1 value
documentation   dict
code            string   lengths 101 – 62.2k

Sample rows (fields labeled per the schema above):
• record: d_id 79,322 · id 268,048 · n_whitespaces 87 · n_words 30 · n_identifiers 6 · path test/lib/ansible_test/_internal/test.py · random_cut and commit_message follow:
def format_command(self) -> str: command = 'ansible-test %s' % self.command if self.test
ansible-test - Use more native type hints. (#78435) * ansible-test - Use more native type hints. Simple search and replace to switch from comments to native type hints for return types of functions with no arguments. * ansible-test - Use more native type hints. Conversion of simple single-line function annota...
fun_name format_command · repo ansible · file_name test.py · language Python
commit_id 3eb0485dd92c88cc92152d3656d94492db44b183 · url https://github.com/ansible/ansible.git
ast_levels 10 · nloc 8 · complexity 3 · token_counts 41 · n_ast_errors 0 · vocab_size 20 · n_ast_nodes 74 · documentation and code follow:
{ "docstring": "Return a string representing the CLI command associated with the test failure.", "language": "en", "n_whitespaces": 11, "n_words": 12, "vocab_size": 11 }
def format_command(self) -> str: command = 'ansible-test %s' % self.command if self.test: command += ' --test %s' % self.test if self.python_version: command += ' --python %s' % self.python_version return command
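The record's `code` field is complete, so its logic can be exercised standalone. The `Failure` wrapper class below is hypothetical scaffolding; only the body of `format_command` comes from the record:

```python
class Failure:
    """Minimal stand-in for the ansible-test result class the record's method lives on."""

    def __init__(self, command, test=None, python_version=None):
        self.command = command
        self.test = test
        self.python_version = python_version

    def format_command(self) -> str:
        # Logic copied from the record's `code` field.
        command = 'ansible-test %s' % self.command
        if self.test:
            command += ' --test %s' % self.test
        if self.python_version:
            command += ' --python %s' % self.python_version
        return command


print(Failure('sanity', test='pep8', python_version='3.10').format_command())
# ansible-test sanity --test pep8 --python 3.10
```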
• record: d_id 47,131 · id 194,950 · n_whitespaces 343 · n_words 102 · n_identifiers 31 · path projects/seeker/scripts/generate_lm_data.py · random_cut and commit_message follow:
def act(self): obs = self.observation reply = {'text': INVALID, 'id': self.getID(), 'episode_done': False} if obs is None or obs['text'] == DO_NOT_RETRIEVE: return Message(reply) # construct the search query labels = obs.get('labels', obs.get('eval_labels', ...
SeeKeR (#4447) * seeker * todo * readme updates; add test * small config changes * various updates * readme fix * model card * add arxiv link * surround spacy with try catch * more protected * more protection of imports * lint
fun_name act · repo ParlAI · file_name generate_lm_data.py · language Python
commit_id 7e453008fde751aff0cfd752662e19fe2adc7410 · url https://github.com/facebookresearch/ParlAI.git
ast_levels 13 · nloc 25 · complexity 7 · token_counts 219 · n_ast_errors 0 · vocab_size 74 · n_ast_nodes 379 · documentation and code follow:
{ "docstring": "\n Search for overlap with the observation label.\n\n Return the best fitting document. A document is valid if the f1 is above the\n threshold AND the f1 is less than 1.0 AND the target label is not in the\n document.\n ", "language": "en", "n_whitespaces": 75, ...
def act(self): obs = self.observation reply = {'text': INVALID, 'id': self.getID(), 'episode_done': False} if obs is None or obs['text'] == DO_NOT_RETRIEVE: return Message(reply) # construct the search query labels = obs.get('labels', obs.get('eval_labels', ...
• record: d_id 84,618 · id 284,002 · n_whitespaces 31 · n_words 10 · n_identifiers 9 · path openbb_terminal/forex/quantitative_analysis/qa_controller.py · random_cut and commit_message follow:
def print_help(self): he
Adds QA and Pred to forex (#1652) * added qa and pred to forex * updated test help * Add forex/qa notebooks api wrapper * Add forex/qa tests * Add all menu commands to the integration test script Co-authored-by: Theodore Aptekarev <aptekarev@gmail.com>
fun_name print_help · repo OpenBBTerminal · file_name qa_controller.py · language Python
commit_id 5bf4618b398492f0ab2d09b3827467c7089831ec · url https://github.com/OpenBB-finance/OpenBBTerminal.git
ast_levels 9 · nloc 33 · complexity 1 · token_counts 22 · n_ast_errors 0 · vocab_size 10 · n_ast_nodes 54 · documentation and code follow:
{ "docstring": "Print help[cmds]\n pick pick target column for analysis[/cmds]\n\n[param]Pair: [/param]{self.ticker}\n[param]Target Column: [/param]{self.target}\n[cmds]\n[info]Statistics:[/info]\n summary brief summary statistics of loaded pair.\n normality normality statistics and tests\n un...
def print_help(self): help_text = f console.print(text=help_text, menu="Forex - Quantitative Analysis")
• record: d_id 51,259 · id 205,879 · n_whitespaces 112 · n_words 30 · n_identifiers 11 · path django/db/models/sql/query.py · random_cut and commit_message follow:
def chain(self, klass=None): obj = self.clone() if klass and obj.__class__ != klass: obj.__class__ = klass if not obj.filter_is_sticky: obj.used_a
Refs #33476 -- Reformatted code with Black.
fun_name chain · repo django · file_name query.py · language Python
commit_id 9c19aff7c7561e3a82978a272ecdaad40dda5c00 · url https://github.com/django/django.git
ast_levels 10 · nloc 10 · complexity 5 · token_counts 64 · n_ast_errors 0 · vocab_size 22 · n_ast_nodes 108 · documentation and code follow:
{ "docstring": "\n Return a copy of the current Query that's ready for another operation.\n The klass argument changes the type of the Query, e.g. UpdateQuery.\n ", "language": "en", "n_whitespaces": 45, "n_words": 23, "vocab_size": 20 }
def chain(self, klass=None): obj = self.clone() if klass and obj.__class__ != klass: obj.__class__ = klass if not obj.filter_is_sticky: obj.used_aliases = set() obj.filter_is_sticky = False if hasattr(obj, "_setup_query"): obj._setup_q...
• record: d_id 1,280 · id 7,846 · n_whitespaces 209 · n_words 81 · n_identifiers 42 · path tests/integration_tests/test_gbm.py · random_cut and commit_message follow:
def run_test_gbm_non_number_inputs(tmpdir, backend_config): input_features = [binary_feature(), category_feature(encoder={"reduce_output": "sum"})] output_feature = binary_feature() output_features = [output_feature] csv_filename = os.path.join(tmpdir, "training.csv") dataset_filename = genera...
Bugfix: non-number inputs to GBM (#2418)
fun_name run_test_gbm_non_number_inputs · repo ludwig · file_name test_gbm.py · language Python
commit_id 24f6583aa3b384aa6179c3579be600760897f1d8 · url https://github.com/ludwig-ai/ludwig.git
ast_levels 13 · nloc 28 · complexity 2 · token_counts 222 · n_ast_errors 0 · vocab_size 65 · n_ast_nodes 354 · documentation and code follow:
{ "docstring": "Test that the GBM model can train and predict with non-number inputs.", "language": "en", "n_whitespaces": 11, "n_words": 12, "vocab_size": 12 }
def run_test_gbm_non_number_inputs(tmpdir, backend_config): input_features = [binary_feature(), category_feature(encoder={"reduce_output": "sum"})] output_feature = binary_feature() output_features = [output_feature] csv_filename = os.path.join(tmpdir, "training.csv") dataset_filename = genera...
• record: d_id 52,762 · id 209,644 · n_whitespaces 34 · n_words 13 · n_identifiers 7 · path scapy/contrib/pnio_rpc.py · random_cut and commit_message follow:
def i2len(self, pkt, val): fld_len = self.f
[MS-RPCE] and [MS-SMB] major update (#3683) * Various fixes regarding DCE/RPC build * DCE/RPC sessions * Cleanup unused code * Add missing GSS_WRAP algo names * Add find_dcerpc_interface * Split SMB client and server * Missing StrFixedLenFieldUtf16 * Remove unfinished smbserver feature * Friend...
fun_name i2len · repo scapy · file_name pnio_rpc.py · language Python
commit_id ca10c5cf00425d0178998ec0b006cbb65ddbfb54 · url https://github.com/secdev/scapy.git
ast_levels 9 · nloc 3 · complexity 1 · token_counts 33 · n_ast_errors 0 · vocab_size 12 · n_ast_nodes 51 · documentation and code follow:
{ "docstring": "get the length of the field, including the padding length", "language": "en", "n_whitespaces": 9, "n_words": 10, "vocab_size": 7 }
def i2len(self, pkt, val): fld_len = self.fld.i2len(pkt, val) return fld_len + self.padlen(fld_len, pkt)
• record: d_id 23,525 · id 109,326 · n_whitespaces 121 · n_words 30 · n_identifiers 16 · path lib/matplotlib/_mathtext.py · random_cut and commit_message follow:
def get_kerning(self, next): advance = self._metrics.advance - self.width kern = 0. if isinstance
Replace MathtextBackend mechanism. The MathtextBackend ("MB") mechanism was previously used to let actual backends customize how they received mathtext results -- either as lists of glyphs and rectangles (for vector backends: MathtextBackendPath), or a bitmap (for raster backends: MathtextBackendAgg); in both cases, m...
fun_name get_kerning · repo matplotlib · file_name _mathtext.py · language Python
commit_id 349f8678f1cf225d6070a236cf41a5e1f044cb18 · url https://github.com/matplotlib/matplotlib.git
ast_levels 11 · nloc 9 · complexity 2 · token_counts 79 · n_ast_errors 0 · vocab_size 25 · n_ast_nodes 114 · documentation and code follow:
{ "docstring": "\n Return the amount of kerning between this and the given character.\n\n This method is called when characters are strung together into `Hlist`\n to create `Kern` nodes.\n ", "language": "en", "n_whitespaces": 55, "n_words": 26, "vocab_size": 25 }
def get_kerning(self, next): advance = self._metrics.advance - self.width kern = 0. if isinstance(next, Char): kern = self.fontset.get_kern( self.font, self.font_class, self.c, self.fontsize, next.font, next.font_class, next.c, next.fontsize, ...
• record: d_id 80,845 · id 271,691 · n_whitespaces 34 · n_words 15 · n_identifiers 10 · path keras/engine/training_generator_v1.py · random_cut and commit_message follow:
def _get_num_samples_or_steps(data, steps_per_epoch): flat_inputs = tf.nest.flatten(data) if hasattr(flat_inputs[0], "shape"): return int(flat_inputs[0].shape[0]), False return steps_per_epoch, True
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
fun_name _get_num_samples_or_steps · repo keras · file_name training_generator_v1.py · language Python
commit_id 84afc5193d38057e2e2badf9c889ea87d80d8fbf · url https://github.com/keras-team/keras.git
ast_levels 13 · nloc 5 · complexity 2 · token_counts 48 · n_ast_errors 0 · vocab_size 14 · n_ast_nodes 77 · documentation and code follow:
{ "docstring": "Returns number of samples or steps, and whether to use steps count mode.", "language": "en", "n_whitespaces": 12, "n_words": 13, "vocab_size": 13 }
def _get_num_samples_or_steps(data, steps_per_epoch): flat_inputs = tf.nest.flatten(data) if hasattr(flat_inputs[0], "shape"): return int(flat_inputs[0].shape[0]), False return steps_per_epoch, True
• record: d_id 4,980 · id 26,394 · n_whitespaces 73 · n_words 34 · n_identifiers 13 · path saleor/graphql/product/tests/test_attributes.py · random_cut and commit_message follow:
def test_retrieve_product_attributes_input_type(staff_api_client, product, channel_USD): query = variables = {"channel": channel_USD.slug} found_products = get_graphql_content( staff_api_client.post_graphql(query, variables) )["data"]["products"]["edges"] assert len(found_products) == 1 ...
Better permissions (#9363) * Better permissions * Add OWNER permission * WIP Add enums to represent function-based permissions * Rename OWNER to IS_OWNER * Add flag to skip autogenerated permission message * Rename InternalPermissions to PermissionFunctions * Add permission descriptions for meta muta...
fun_name test_retrieve_product_attributes_input_type · repo saleor · file_name test_attributes.py · language Python
commit_id ab45ebda5a14df6806046fd552e2c6d08f025503 · url https://github.com/saleor/saleor.git
ast_levels 13 · nloc 24 · complexity 2 · token_counts 87 · n_ast_errors 0 · vocab_size 26 · n_ast_nodes 156 · documentation and code follow:
{ "docstring": "\n query ($channel: String){\n products(first: 10, channel: $channel) {\n edges {\n node {\n attributes {\n values {\n inputType\n }\n }\n }\n }\n ...
def test_retrieve_product_attributes_input_type(staff_api_client, product, channel_USD): query = variables = {"channel": channel_USD.slug} found_products = get_graphql_content( staff_api_client.post_graphql(query, variables) )["data"]["products"]["edges"] assert len(found_products) == 1 ...
• record: d_id 51,087 · id 205,311 · n_whitespaces 606 · n_words 124 · n_identifiers 21 · path django/db/migrations/migration.py · random_cut and commit_message follow:
def apply(self, project_state, schema_editor, collect_sql=False): for operation in self.operations: # If this operation cannot be represented as SQL, place a comment # there instead if collect_sql: schema_editor.collected_sql.append("--") ...
Refs #33476 -- Reformatted code with Black.
fun_name apply · repo django · file_name migration.py · language Python
commit_id 9c19aff7c7561e3a82978a272ecdaad40dda5c00 · url https://github.com/django/django.git
ast_levels 15 · nloc 27 · complexity 9 · token_counts 160 · n_ast_errors 0 · vocab_size 87 · n_ast_nodes 266 · documentation and code follow:
{ "docstring": "\n Take a project_state representing all migrations prior to this one\n and a schema_editor for a live database and apply the migration\n in a forwards order.\n\n Return the resulting project state for efficient reuse by following\n Migrations.\n ", "languag...
def apply(self, project_state, schema_editor, collect_sql=False): for operation in self.operations: # If this operation cannot be represented as SQL, place a comment # there instead if collect_sql: schema_editor.collected_sql.append("--") ...
• record: d_id 30,960 · id 136,637 · n_whitespaces 440 · n_words 122 · n_identifiers 25 · path python/ray/autoscaler/_private/kuberay/node_provider.py · random_cut and commit_message follow:
def safe_to_scale(self) -> bool: # Get the list of nodes. node_set = set(self.node_data_dict.keys()) worker_groups = self._raycluster["spec"].get("workerGroupSpecs", []) # Accumulates the indices of worker groups with non-empty workersToDelete non_empty_worker_group_ind...
KubeRay node provider refactor (#30281) Implements KubeRay node provider as a "BatchingNodeProvider". Builds on #29933. Summary of design An autoscaler update now works like this: list pod data from k8s check if it's safe to proceed with update. Abort the update if not. do some internal calculation to determ...
fun_name safe_to_scale · repo ray · file_name node_provider.py · language Python
commit_id c976799dfd96806ec9972a287835f7a034ec3d2c · url https://github.com/ray-project/ray.git
ast_levels 15 · nloc 40 · complexity 7 · token_counts 147 · n_ast_errors 0 · vocab_size 79 · n_ast_nodes 262 · documentation and code follow:
{ "docstring": "Returns False iff non_terminated_nodes contains any pods in the RayCluster's\n workersToDelete lists.\n\n Explanation:\n If there are any workersToDelete which are non-terminated,\n we should wait for the operator to do its job and delete those\n pods. Therefore, we ...
def safe_to_scale(self) -> bool: # Get the list of nodes. node_set = set(self.node_data_dict.keys()) worker_groups = self._raycluster["spec"].get("workerGroupSpecs", []) # Accumulates the indices of worker groups with non-empty workersToDelete non_empty_worker_group_ind...
• record: d_id 50,543 · id 203,818 · n_whitespaces 240 · n_words 58 · n_identifiers 14 · path django/contrib/gis/db/backends/oracle/operations.py · random_cut and commit_message follow:
def get_distance(self, f, value, lookup_type): if not value: return [] value = value[0] if isinstance(value, Distance): if f.geodetic(self.connection): dist_param = value.m else: dist_param = getattr( ...
Refs #33476 -- Reformatted code with Black.
fun_name get_distance · repo django · file_name operations.py · language Python
commit_id 9c19aff7c7561e3a82978a272ecdaad40dda5c00 · url https://github.com/django/django.git
ast_levels 18 · nloc 16 · complexity 5 · token_counts 89 · n_ast_errors 0 · vocab_size 42 · n_ast_nodes 148 · documentation and code follow:
{ "docstring": "\n Return the distance parameters given the value and the lookup type.\n On Oracle, geometry columns with a geodetic coordinate system behave\n implicitly like a geography column, and thus meters will be used as\n the distance parameter on them.\n ", "language": "e...
def get_distance(self, f, value, lookup_type): if not value: return [] value = value[0] if isinstance(value, Distance): if f.geodetic(self.connection): dist_param = value.m else: dist_param = getattr( ...
• record: d_id 16,326 · id 74,848 · n_whitespaces 38 · n_words 10 · n_identifiers 10 · path wagtail/documents/tests/test_models.py · random_cut and commit_message follow:
def test_standard_get_document_model(self): del settings.WAGTAILDOCS_DOCUMENT_MODEL from wagtail.documents.models import Document self.assertIs(get_document_model(), Document)
Reformat with black
fun_name test_standard_get_document_model · repo wagtail · file_name test_models.py · language Python
commit_id d10f15e55806c6944827d801cd9c2d53f5da4186 · url https://github.com/wagtail/wagtail.git
ast_levels 9 · nloc 4 · complexity 1 · token_counts 28 · n_ast_errors 0 · vocab_size 10 · n_ast_nodes 46 · documentation and code follow:
{ "docstring": "Test get_document_model with no WAGTAILDOCS_DOCUMENT_MODEL", "language": "en", "n_whitespaces": 4, "n_words": 5, "vocab_size": 5 }
def test_standard_get_document_model(self): del settings.WAGTAILDOCS_DOCUMENT_MODEL from wagtail.documents.models import Document self.assertIs(get_document_model(), Document)
• record: d_id 76,908 · id 261,639 · n_whitespaces 92 · n_words 51 · n_identifiers 8 · path sklearn/utils/__init__.py · random_cut and commit_message follow:
def _safe_assign(X, values, *, row_indexer=None, column_indexer=None): row_indexer = slice(None, None, None) if row_indexer is None else row_indexer column_indexer = ( slice(None, None, None) if column_indexer is None else column_indexer ) if
MAINT test globally setting output via context manager (#24932) Co-authored-by: jeremie du boisberranger <jeremiedbb@yahoo.fr>
fun_name _safe_assign · repo scikit-learn · file_name __init__.py · language Python
commit_id af16e5934ae269d05fd7df983b97def7c0ef0bd2 · url https://github.com/scikit-learn/scikit-learn.git
ast_levels 10 · nloc 9 · complexity 4 · token_counts 80 · n_ast_errors 0 · vocab_size 33 · n_ast_nodes 120 · documentation and code follow:
{ "docstring": "Safe assignment to a numpy array, sparse matrix, or pandas dataframe.\n\n Parameters\n ----------\n X : {ndarray, sparse-matrix, dataframe}\n Array to be modified. It is expected to be 2-dimensional.\n\n values : ndarray\n The values to be assigned to `X`.\n\n row_indexer ...
def _safe_assign(X, values, *, row_indexer=None, column_indexer=None): row_indexer = slice(None, None, None) if row_indexer is None else row_indexer column_indexer = ( slice(None, None, None) if column_indexer is None else column_indexer ) if hasattr(X, "iloc"): # pandas dataframe ...
• record: d_id 55,690 · id 219,662 · n_whitespaces 31 · n_words 10 · n_identifiers 6 · path python3.10.4/Lib/_pydecimal.py · random_cut and commit_message follow:
def copy_sign(self, a, b): a = _convert_other(a, raiseit=True) return a.copy_sign(b)
add python 3.10.4 for windows
fun_name copy_sign · repo XX-Net · file_name _pydecimal.py · language Python
commit_id 8198943edd73a363c266633e1aa5b2a9e9c9f526 · url https://github.com/XX-net/XX-Net.git
ast_levels 9 · nloc 3 · complexity 1 · token_counts 27 · n_ast_errors 0 · vocab_size 10 · n_ast_nodes 43 · documentation and code follow:
{ "docstring": "Copies the second operand's sign to the first one.\n\n In detail, it returns a copy of the first operand with the sign\n equal to the sign of the second operand.\n\n >>> ExtendedContext.copy_sign(Decimal( '1.50'), Decimal('7.33'))\n Decimal('1.50')\n >>> ExtendedCont...
def copy_sign(self, a, b): a = _convert_other(a, raiseit=True) return a.copy_sign(b)
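The record's context method delegates to the standard library's `Decimal.copy_sign`, so the docstring's behaviour can be checked directly against `decimal`:

```python
from decimal import Decimal

# copy_sign returns a copy of the first operand carrying the
# second operand's sign, per the record's docstring examples.
print(Decimal('1.50').copy_sign(Decimal('-7.33')))   # -1.50
print(Decimal('-1.50').copy_sign(Decimal('7.33')))   # 1.50
```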
• record: d_id 15,591 · id 70,979 · n_whitespaces 319 · n_words 82 · n_identifiers 21 · path wagtail/contrib/forms/views.py · random_cut and commit_message follow:
def get_validated_ordering(self): orderable_fields = self.orderable_fields or ()
Fix warnings from flake8-comprehensions.
fun_name get_validated_ordering · repo wagtail · file_name views.py · language Python
commit_id de3fcba9e95818e9634ab7de6bfcb1f4221f2775 · url https://github.com/wagtail/wagtail.git
ast_levels 16 · nloc 20 · complexity 11 · token_counts 122 · n_ast_errors 0 · vocab_size 58 · n_ast_nodes 205 · documentation and code follow:
{ "docstring": " Return a dict of field names with ordering labels if ordering is valid ", "language": "en", "n_whitespaces": 14, "n_words": 13, "vocab_size": 12 }
def get_validated_ordering(self): orderable_fields = self.orderable_fields or () ordering = {} if self.is_export: # Revert to CSV order_by submit_time ascending for backwards compatibility default_ordering = self.ordering_csv or () else: defa...
• record: d_id 42,333 · id 177,309 · n_whitespaces 83 · n_words 48 · n_identifiers 9 · path networkx/generators/tests/test_expanders.py
  ast_errors: @pytest.mark.parametrize("p", (3, 5, 7, 11, 13)) · random_cut and commit_message follow:
def test_chordal_cycle_graph(p): G = nx.chordal_cycle_graph(p) assert len(G) == p # TODO The second largest eigenvalue should be smaller than a constant, # independent of the number of nodes in the graph: # # eigs = sorted(sp.linalg.eigvalsh(nx.adjacency_matrix(G).toarray())) # ...
Minor updates to expanders generator tests (#6027) * Split MGG test into two based on dependencies. * Parametrize tests on prime numbers. * Use fns from nx namespace, rm explicit imports. * Parametrize exception test and check message.
fun_name test_chordal_cycle_graph · repo networkx · file_name test_expanders.py · language Python
commit_id 06dc63c62822a56d3a8ed36c65630298d8954cff · url https://github.com/networkx/networkx.git
ast_levels 8 · nloc 3 · complexity 1 · token_counts 21 · n_ast_errors 1 · vocab_size 39 · n_ast_nodes 74 · documentation and code follow:
{ "docstring": "Test for the :func:`networkx.chordal_cycle_graph` function.", "language": "en", "n_whitespaces": 4, "n_words": 5, "vocab_size": 5 }
def test_chordal_cycle_graph(p): G = nx.chordal_cycle_graph(p) assert len(G) == p # TODO The second largest eigenvalue should be smaller than a constant, # independent of the number of nodes in the graph: # # eigs = sorted(sp.linalg.eigvalsh(nx.adjacency_matrix(G).toarray())) # ...
• record: d_id 79,938 · id 269,171 · n_whitespaces 674 · n_words 278 · n_identifiers 12 · path keras/utils/dataset_utils.py · random_cut and commit_message follow:
def convert_dataset_split_sizes(left_size,right_size,total_size): left_size_type = type(left_size) right_size_type = type(right_size) if left_size is not None and left_size_type not in [int,float]: raise ValueError(f'Invalid `left_size` type Got {left_size_type}' 'It should be one ...
fixes dataset slicing errors
fun_name convert_dataset_split_sizes · repo keras · file_name dataset_utils.py · language Python
commit_id a127de7007fe49413bd9167e179f5df12b6c100e · url https://github.com/keras-team/keras.git
ast_levels 13 · nloc 51 · complexity 25 · token_counts 278 · n_ast_errors 0 · vocab_size 115 · n_ast_nodes 496 · documentation and code follow:
{ "docstring": "Helper function to convert left_size/right_size relative to dataset's size\n ", "language": "en", "n_whitespaces": 11, "n_words": 9, "vocab_size": 8 }
def convert_dataset_split_sizes(left_size,right_size,total_size): left_size_type = type(left_size) right_size_type = type(right_size) if left_size is not None and left_size_type not in [int,float]: raise ValueError(f'Invalid `left_size` type Got {left_size_type}' 'It should be one ...
• record: d_id 17,116 · id 80,945 · n_whitespaces 18 · n_words 4 · n_identifiers 10 · path awx/main/managers.py · random_cut and commit_message follow:
def active_count(self): return self.order_by().exclude(inventory_sources__source='controller').values(name_lower=Lower('name')).distinct().count()
Fixes case sensitive host count
fun_name active_count · repo awx · file_name managers.py · language Python
commit_id f52ef6e9677b01c111b012a8725da43a2580d8f1 · url https://github.com/ansible/awx.git
ast_levels 15 · nloc 2 · complexity 1 · token_counts 37 · n_ast_errors 0 · vocab_size 4 · n_ast_nodes 68 · documentation and code follow:
{ "docstring": "Return count of active, unique hosts for licensing.\n Construction of query involves:\n - remove any ordering specified in model's Meta\n - Exclude hosts sourced from another Tower\n - Restrict the query to only return the name column\n - Only consider results th...
def active_count(self): return self.order_by().exclude(inventory_sources__source='controller').values(name_lower=Lower('name')).distinct().count()
• record: d_id 73,183 · id 249,886 · n_whitespaces 80 · n_words 27 · n_identifiers 9 · path tests/handlers/test_sso.py · random_cut and commit_message follow:
async def test_set_avatar_incorrect_mime_type(self) -> None: handler = self.hs.get_sso_handler() # any random user works since image check is supposed to fail us
Add support for handling avatar with SSO login (#13917) This commit adds support for handling a provided avatar picture URL when logging in via SSO. Signed-off-by: Ashish Kumar <ashfame@users.noreply.github.com> Fixes #9357.
fun_name test_set_avatar_incorrect_mime_type · repo synapse · file_name test_sso.py · language Python
commit_id 09de2aecb05cb46e0513396e2675b24c8beedb68 · url https://github.com/matrix-org/synapse.git
ast_levels 12 · nloc 7 · complexity 1 · token_counts 38 · n_ast_errors 0 · vocab_size 26 · n_ast_nodes 70 · documentation and code follow:
{ "docstring": "Tests that saving an avatar fails when its mime type is not allowed", "language": "en", "n_whitespaces": 12, "n_words": 13, "vocab_size": 13 }
async def test_set_avatar_incorrect_mime_type(self) -> None: handler = self.hs.get_sso_handler() # any random user works since image check is supposed to fail user_id = "@sso-user:test" self.assertFalse( self.get_success(handler.set_avatar(user_id, "http://my.serve...
• record: d_id 81,585 · id 276,201 · n_whitespaces 118 · n_words 35 · n_identifiers 10 · path keras/saving/saved_model/utils.py · random_cut and commit_message follow:
def layer_uses_training_bool(layer): if layer._expects_training_arg: # pylint: disable=protected-access return True visited = {layer} to_visit = list_all_layers(la
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
fun_name layer_uses_training_bool · repo keras · file_name utils.py · language Python
commit_id 84afc5193d38057e2e2badf9c889ea87d80d8fbf · url https://github.com/keras-team/keras.git
ast_levels 11 · nloc 14 · complexity 5 · token_counts 69 · n_ast_errors 0 · vocab_size 27 · n_ast_nodes 117 · documentation and code follow:
{ "docstring": "Returns whether this layer or any of its children uses the training arg.", "language": "en", "n_whitespaces": 12, "n_words": 13, "vocab_size": 13 }
def layer_uses_training_bool(layer): if layer._expects_training_arg: # pylint: disable=protected-access return True visited = {layer} to_visit = list_all_layers(layer) while to_visit: layer = to_visit.pop() if layer in visited: continue if getattr(layer,...
• record: d_id 41,674 · id 176,084 · n_whitespaces 54 · n_words 23 · n_identifiers 8 · path tests/test_edgeql_for.py
  ast_errors: # This is basically test_edgeql_for_in_computable_01 but with # a WITH binding in front of the whole shape await self.assert_query_result( r''' WITH U := ( SELECT User { select_deck := ( ...
  random_cut and commit_message follow:
async def test_edgeql_for_in_computable_09(self): # This
Add a `bag` type that tells assert_query_result to ignore order (#3314) assert_query_result currently supports using sets to ignore order, but that doesn't work for objects, which can't be hashed or sorted. There is a system for specifying a sort key for internal data, but it is way clunkier than just saying we d...
fun_name test_edgeql_for_in_computable_09 · repo edgedb · file_name test_edgeql_for.py · language Python
commit_id 26be7d28bdb4eb96c888e373e08f46e6b85711e3 · url https://github.com/edgedb/edgedb.git
ast_levels 6 · nloc 30 · complexity 1 · token_counts 48 · n_ast_errors 2 · vocab_size 22 · n_ast_nodes 34 · documentation and code follow:
{ "docstring": "\n WITH\n U := (\n SELECT User {\n select_deck := (\n FOR letter IN {'I', 'B'}\n UNION (\n SELECT User.deck {\n ...
async def test_edgeql_for_in_computable_09(self): # This is basically test_edgeql_for_in_computable_01 but with # a WITH binding in front of the whole shape await self.assert_query_result( r
• record: d_id 13,815 · id 65,173 · n_whitespaces 33 · n_words 52 · n_identifiers 26 · path erpnext/accounts/report/budget_variance_report/budget_variance_report.py · random_cut and commit_message follow:
def get_actual_details(name, filters): budget_against = frappe.scrub(filters.get("budget_against")) cond = "" if filters.get("budget_against") == "Cost Center": cc_lft, cc_rgt = frappe.db.get_value("Cost Center", name, ["lft", "rgt"]) cond = .format( lft=cc_lft, rgt=cc_rgt ) ac_details = frappe.db.sql( ...
style: format code with black
fun_name get_actual_details · repo erpnext · file_name budget_variance_report.py · language Python
commit_id 494bd9ef78313436f0424b918f200dab8fc7c20b · url https://github.com/frappe/erpnext.git
ast_levels 12 · nloc 53 · complexity 3 · token_counts 138 · n_ast_errors 0 · vocab_size 43 · n_ast_nodes 223 · documentation and code follow:
{ "docstring": "\n\t\t\t\tand lft >= \"{lft}\"\n\t\t\t\tand rgt <= \"{rgt}\"\n\t\t\t\n\t\t\tselect\n\t\t\t\tgl.account,\n\t\t\t\tgl.debit,\n\t\t\t\tgl.credit,\n\t\t\t\tgl.fiscal_year,\n\t\t\t\tMONTHNAME(gl.posting_date) as month_name,\n\t\t\t\tb.{budget_against} as budget_against\n\t\t\tfrom\n\t\t\t\t`tabGL Entry` gl...
def get_actual_details(name, filters): budget_against = frappe.scrub(filters.get("budget_against")) cond = "" if filters.get("budget_against") == "Cost Center": cc_lft, cc_rgt = frappe.db.get_value("Cost Center", name, ["lft", "rgt"]) cond = .format( lft=cc_lft, rgt=cc_rgt ) ac_details = frappe.db.sql( ...
• record: d_id 4,477 · id 22,864 · n_whitespaces 68 · n_words 28 · n_identifiers 8 · path VoiceAssistant/Project_Basic_struct/textRead.py · random_cut and commit_message follow:
def print_index(toc): dash = "-"*(100 - 7) spa
VoiceAssistant This is Voice Assistant coded using Python which can do the following: - 1. Speak Text entered by User. 2. Search anything on Google. 3. Search anything on Wikipedia. 4. Read an MS Word(docx) document. 5. Read a book(PDF). 6. Can be used as a Dictator.
fun_name print_index · repo Python · file_name textRead.py · language Python
commit_id 39c49e07066b2a53e176d555af6a7bf8aabb8a9c · url https://github.com/geekcomputers/Python.git
ast_levels 14 · nloc 8 · complexity 2 · token_counts 55 · n_ast_errors 0 · vocab_size 24 · n_ast_nodes 131 · documentation and code follow:
{ "docstring": "Prints out the index in proper format with title name and page number\r\n\r\n Args:\r\n toc (nested list): toc[1] - Topic name\r\n toc[2] - Page number\r\n ", "language": "en", "n_whitespaces": 64, "n_words": 25, "vocab_size": 22 }
def print_index(toc): dash = "-"*(100 - 7) space = " "*47 print(f"{space}INDEX") print(f"\n\nName : {dash} PageNo.\n\n\n") for topic in toc: eq_dash = "-"*(100 - len(topic[1])) print(f"{topic[1]} {eq_dash} {topic[2]}")
• record: d_id 22,463 · id 106,836 · n_whitespaces 304 · n_words 63 · n_identifiers 18 · path py/visdom/__init__.py · random_cut and commit_message follow:
def pie(self, X, win=None, env=None, opts=None): X = np.squeeze(X) assert X.ndim == 1, "X should be one-dimensional" assert np.all(np.greater_equal(X, 0)), "X cannot contain negative values" opts = {} if opts is None else opts _title2str(opts) _assert_opts(opts...
apply black py to all python files
fun_name pie · repo visdom · file_name __init__.py · language Python
commit_id 5b8b7f267cfaf76a2a39a727ef31a62b3909a093 · url https://github.com/fossasia/visdom.git
ast_levels 12 · nloc 23 · complexity 2 · token_counts 128 · n_ast_errors 0 · vocab_size 55 · n_ast_nodes 213 · documentation and code follow:
{ "docstring": "\n This function draws a pie chart based on the `N` tensor `X`.\n\n The following `opts` are supported:\n\n - `opts.legend`: `list` containing legend names\n ", "language": "en", "n_whitespaces": 52, "n_words": 23, "vocab_size": 23 }
def pie(self, X, win=None, env=None, opts=None): X = np.squeeze(X) assert X.ndim == 1, "X should be one-dimensional" assert np.all(np.greater_equal(X, 0)), "X cannot contain negative values" opts = {} if opts is None else opts _title2str(opts) _assert_opts(opts...
• record: d_id 9,046 · id 46,963 · n_whitespaces 30 · n_words 9 · n_identifiers 10 · path airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py · random_cut and commit_message follow:
def dry_run(self) -> None: pod = self.build_pod_request_obj()
Cleanup dup code now that k8s provider requires 2.3.0+ (#22845)
fun_name dry_run · repo airflow · file_name kubernetes_pod.py · language Python
commit_id 04082ac091e92587b22c8323170ebe38bc68a19a · url https://github.com/apache/airflow.git
ast_levels 13 · nloc 8 · complexity 1 · token_counts 35 · n_ast_errors 0 · vocab_size 9 · n_ast_nodes 62 · documentation and code follow:
{ "docstring": "\n Prints out the pod definition that would be created by this operator.\n Does not include labels specific to the task instance (since there isn't\n one in a dry_run) and excludes all empty elements.\n ", "language": "en", "n_whitespaces": 62, "n_words": 33, "vocab...
def dry_run(self) -> None: pod = self.build_pod_request_obj() print(yaml.dump(prune_dict(pod.to_dict(), mode='strict')))
• record: d_id 48,111 · id 196,693 · n_whitespaces 20 · n_words 17 · n_identifiers 7 · path sympy/stats/crv_types.py · random_cut and commit_message follow:
def ExponentialPower(name, mu, alpha, beta): r return rv(name, ExponentialPowerDistribution, (mu, alpha, beta)) #------------------------------------------------------------------------------- # F distribution ----------------------------
Documentation cleanup 5
fun_name ExponentialPower · repo sympy · file_name crv_types.py · language Python
commit_id 9ad8ab9fe58051cf11626ba6654852fcfec60147 · url https://github.com/sympy/sympy.git
ast_levels 8 · nloc 63 · complexity 1 · token_counts 28 · n_ast_errors 0 · vocab_size 16 · n_ast_nodes 40 · documentation and code follow:
{ "docstring": "\n Create a Continuous Random Variable with Exponential Power distribution.\n This distribution is known also as Generalized Normal\n distribution version 1.\n\n Explanation\n ===========\n\n The density of the Exponential Power distribution is given by\n\n .. math::\n f(x)...
def ExponentialPower(name, mu, alpha, beta): r return rv(name, ExponentialPowerDistribution, (mu, alpha, beta)) #------------------------------------------------------------------------------- # F distribution ---------------------------------------------------------------
• record: d_id 1,559 · id 9,135 · n_whitespaces 211 · n_words 99 · n_identifiers 8 · path parsing/dml_csr/utils/miou.py · random_cut and commit_message follow:
def get_palette(num_cls): n = num_cls palette = [0] * (n * 3) for j in range(0, n): lab = j palette[j * 3 + 0] = 0 palette[j * 3 + 1] = 0 palette[j * 3 + 2] = 0 i = 0 while lab: palette[j * 3 + 0] |= (((lab >> 0) & 1) << (7 - i)) ...
Create miou.py
fun_name get_palette · repo insightface · file_name miou.py · language Python
commit_id 995b44897fe6158bb70ad03a3e79f517f65f9034 · url https://github.com/deepinsight/insightface.git
ast_levels 16 · nloc 16 · complexity 3 · token_counts 161 · n_ast_errors 0 · vocab_size 41 · n_ast_nodes 239 · documentation and code follow:
{ "docstring": " Returns the color map for visualizing the segmentation mask.\n Args:\n num_cls: Number of classes\n Returns:\n The color map\n ", "language": "en", "n_whitespaces": 42, "n_words": 18, "vocab_size": 15 }
def get_palette(num_cls): n = num_cls palette = [0] * (n * 3) for j in range(0, n): lab = j palette[j * 3 + 0] = 0 palette[j * 3 + 1] = 0 palette[j * 3 + 2] = 0 i = 0 while lab: palette[j * 3 + 0] |= (((lab >> 0) & 1) << (7 - i)) ...
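The record's `code` field is cut off after the R-channel line, but the visible pattern is the standard PASCAL-VOC-style bit-interleaved colormap. A self-contained sketch of that technique follows; the G- and B-channel lines are an assumption extrapolated from the visible R-channel line, not the record's verbatim code:

```python
def get_palette(num_cls):
    """Build a flat [R0, G0, B0, R1, G1, B1, ...] segmentation palette.

    Each class index's bits are spread across the three channels,
    one bit per pass, from the most significant pixel bit downward.
    """
    palette = [0] * (num_cls * 3)
    for j in range(num_cls):
        lab = j
        i = 0
        while lab:
            # Bit 0 of lab -> red, bit 1 -> green, bit 2 -> blue,
            # placed at pixel bit (7 - i) so early bits are brightest.
            palette[j * 3 + 0] |= (((lab >> 0) & 1) << (7 - i))
            palette[j * 3 + 1] |= (((lab >> 1) & 1) << (7 - i))
            palette[j * 3 + 2] |= (((lab >> 2) & 1) << (7 - i))
            i += 1
            lab >>= 3
    return palette


print(get_palette(3))  # class 0 black, class 1 dark red, class 2 dark green
```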
• record: d_id 12,067 · id 60,287 · n_whitespaces 348 · n_words 144 · n_identifiers 37 · path code/deep/BJMMD/caffe/python/caffe/pycaffe.py · random_cut and commit_message follow:
def _Net_forward_backward_all(self, blobs=None, diffs=None, **kwargs): # Batch blobs and diffs. all_outs = {out: [] for out in set(self.outputs + (blobs or []))} all_diffs = {diff: [] for diff in set(self.inputs + (diffs or []))} forward_batches = self._batch({in_:
Balanced joint maximum mean discrepancy for deep transfer learning
fun_name _Net_forward_backward_all · repo transferlearning · file_name pycaffe.py · language Python
commit_id cc4d0564756ca067516f71718a3d135996525909 · url https://github.com/jindongwang/transferlearning.git
ast_levels 14 · nloc 23 · complexity 15 · token_counts 326 · n_ast_errors 0 · vocab_size 90 · n_ast_nodes 500 · documentation and code follow:
{ "docstring": "\n Run net forward + backward in batches.\n\n Parameters\n ----------\n blobs: list of blobs to extract as in forward()\n diffs: list of diffs to extract as in backward()\n kwargs: Keys are input (for forward) and output (for backward) blob names\n and values are ndarrays....
def _Net_forward_backward_all(self, blobs=None, diffs=None, **kwargs): # Batch blobs and diffs. all_outs = {out: [] for out in set(self.outputs + (blobs or []))} all_diffs = {diff: [] for diff in set(self.inputs + (diffs or []))} forward_batches = self._batch({in_: kwargs[in_] ...
• record: d_id 72,737 · id 249,233 · n_whitespaces 94 · n_words 19 · n_identifiers 13 · path tests/rest/admin/test_device.py · random_cut and commit_message follow:
def test_user_does_not_exist(self) -> None: url = "/_synapse/admin/v2/users/@unknown_person:test/devices" channe
Use literals in place of `HTTPStatus` constants in tests (#13479) Replace - `HTTPStatus.NOT_FOUND` - `HTTPStatus.FORBIDDEN` - `HTTPStatus.UNAUTHORIZED` - `HTTPStatus.CONFLICT` - `HTTPStatus.CREATED` Signed-off-by: Dirk Klimpel <dirk@klimpel.org>
fun_name test_user_does_not_exist · repo synapse · file_name test_device.py · language Python
commit_id 1595052b2681fb86c1c1b9a6028c1bc0d38a2e4b · url https://github.com/matrix-org/synapse.git
ast_levels 10 · nloc 12 · complexity 1 · token_counts 59 · n_ast_errors 0 · vocab_size 18 · n_ast_nodes 96 · documentation and code follow:
{ "docstring": "\n Tests that a lookup for a user that does not exist returns a 404\n ", "language": "en", "n_whitespaces": 29, "n_words": 14, "vocab_size": 11 }
def test_user_does_not_exist(self) -> None: url = "/_synapse/admin/v2/users/@unknown_person:test/devices" channel = self.make_request( "GET", url, access_token=self.admin_user_tok, ) self.assertEqual(404, channel.code, msg=channel.json_body) ...
• record: d_id 71,133 · id 246,289 · n_whitespaces 364 · n_words 66 · n_identifiers 20 · path synapse/replication/tcp/protocol.py · random_cut and commit_message follow:
def send_ping(self) -> None: now = self.clock.time_msec() if self.time_we_closed: if now - self.time_we_closed > PING_TIMEOUT_MS: logger.info( "[%s] Failed to close connection gracefully, aborting", self.id() ) ass...
Add missing type hints to synapse.replication. (#11938)
fun_name send_ping · repo synapse · file_name protocol.py · language Python
commit_id d0e78af35e519ff76bd23e786007f3e7130d90f7 · url https://github.com/matrix-org/synapse.git
ast_levels 16 · nloc 25 · complexity 6 · token_counts 120 · n_ast_errors 0 · vocab_size 52 · n_ast_nodes 199 · documentation and code follow:
{ "docstring": "Periodically sends a ping and checks if we should close the connection\n due to the other side timing out.\n ", "language": "en", "n_whitespaces": 33, "n_words": 19, "vocab_size": 18 }
def send_ping(self) -> None: now = self.clock.time_msec() if self.time_we_closed: if now - self.time_we_closed > PING_TIMEOUT_MS: logger.info( "[%s] Failed to close connection gracefully, aborting", self.id() ) ass...
• record: d_id 31,619 · id 139,165 · n_whitespaces 86 · n_words 27 · n_identifiers 13 · path python/ray/workflow/workflow_context.py · random_cut and commit_message follow:
def workflow_logging_context(job_id) -> None: node = ray.worker._global_node original_out_file, original_err_file = node.get_log_file_handles( get_worker_log_file_name("WORKER") ) out_file, err_file = node.get_log_file_handles( get_worker_log_file_name("WORKER", job_id) ) tr...
[Workflow]Make workflow logs publish to the correct driver. (#24089) All workflow tasks are executed as remote functions that submitted from WorkflowManagmentActor. WorkflowManagmentActor is a detached long-running actor whose owner is the first driver in the cluster that runs the very first workflow execution. Theref...
fun_name workflow_logging_context · repo ray · file_name workflow_context.py · language Python
commit_id e8fc66af348f2afd2b578fe1c6776cc88ea82499 · url https://github.com/ray-project/ray.git
ast_levels 11 · nloc 27 · complexity 2 · token_counts 60 · n_ast_errors 0 · vocab_size 23 · n_ast_nodes 104 · documentation and code follow:
{ "docstring": "Initialize the workflow logging context.\n\n Workflow executions are running as remote functions from\n WorkflowManagementActor. Without logging redirection, workflow\n inner execution logs will be pushed to the driver that initially\n created WorkflowManagementActor rather than the driver...
def workflow_logging_context(job_id) -> None: node = ray.worker._global_node original_out_file, original_err_file = node.get_log_file_handles( get_worker_log_file_name("WORKER") ) out_file, err_file = node.get_log_file_handles( get_worker_log_file_name("WORKER", job_id) ) tr...
5,455
30,270
87
spotdl/console/entry_point.py
25
14
def console_entry_point(): if "--profile" in sys.argv: with cProfile.Profile() as profile: entry_point() stats = pstats.Stats(profile) stats.sort_stats(pstats.SortKey.TIME)
added option to profile code fized pylint warnings
console_entry_point
cf9030f843079d3f69cd1414050f8b594c84cee1
spotify-downloader
entry_point.py
12
9
https://github.com/spotDL/spotify-downloader.git
2
53
0
24
101
Python
{ "docstring": "\n Wrapper around `entry_point` so we can profile the code\n ", "language": "en", "n_whitespaces": 16, "n_words": 9, "vocab_size": 9 }
def console_entry_point(): if "--profile" in sys.argv: with cProfile.Profile() as profile: entry_point() stats = pstats.Stats(profile) stats.sort_stats(pstats.SortKey.TIME) # Use snakeviz to visualize the profile stats.dump_stats("spotdl.profile") else...
34,012
147,576
31
rllib/agents/trainer_config.py
10
3
def callbacks(self, callbacks_class) -> "TrainerConfig": self.callbacks_class = callbacks_c
[RLlib] POC: Config objects instead of dicts (PPO only). (#23491)
callbacks
2eaa54bd763ae0e63158ae0d939633c804394b78
ray
trainer_config.py
7
14
https://github.com/ray-project/ray.git
1
17
0
10
31
Python
{ "docstring": "Sets the callbacks configuration.\n\n Args:\n callbacks_class: Callbacks class, whose methods will be run during\n various phases of training and environment sample collection.\n See the `DefaultCallbacks` class and\n `examples/custom_metr...
def callbacks(self, callbacks_class) -> "TrainerConfig": self.callbacks_class = callbacks_class return self
18,249
87,194
184
tests/sentry/api/endpoints/test_project_details.py
30
16
def test_get_dynamic_sampling_after_migrating_to_new_plan_default_biases(self): self.project.update_option("sentry:dynamic_sampling", self.dynamic_sampling_data) with Feature( { self.universal_ds_flag: True, self.old_ds_flag: True, s...
feat(ds): Support new DS behaviour in project_details endpoint (#40387) Supports new adaptive dynamic sampling behaviour alongside the deprecated dynamic sampling behaviour and achieves that through feature flag differentiation This PR achieve that through the following: - Introducing a new `DynamicSamplingBiasS...
test_get_dynamic_sampling_after_migrating_to_new_plan_default_biases
5462ee11ad11ebb9a50323befcd286816d7898c8
sentry
test_project_details.py
12
14
https://github.com/getsentry/sentry.git
1
83
0
27
135
Python
{ "docstring": "\n Tests the case when an organization was in EA/LA and has setup previously Dynamic Sampling rules,\n and now they have migrated to an AM2 plan, but haven't manipulated the bias toggles yet so they get the\n default biases. This also ensures that they no longer receive the deprec...
def test_get_dynamic_sampling_after_migrating_to_new_plan_default_biases(self): self.project.update_option("sentry:dynamic_sampling", self.dynamic_sampling_data) with Feature( { self.universal_ds_flag: True, self.old_ds_flag: True, s...
12,450
61,225
77
.venv/lib/python3.8/site-packages/pip/_internal/utils/misc.py
38
4
def strtobool(val): # type: (str) -> int
upd; format
strtobool
f638f5d0e6c8ebed0e69a6584bc7f003ec646580
transferlearning
misc.py
12
8
https://github.com/jindongwang/transferlearning.git
3
59
0
34
117
Python
{ "docstring": "Convert a string representation of truth to true (1) or false (0).\n\n True values are 'y', 'yes', 't', 'true', 'on', and '1'; false values\n are 'n', 'no', 'f', 'false', 'off', and '0'. Raises ValueError if\n 'val' is anything else.\n ", "language": "en", "n_whitespaces": 52, "n_wo...
def strtobool(val): # type: (str) -> int val = val.lower() if val in ("y", "yes", "t", "true", "on", "1"): return 1 elif val in ("n", "no", "f", "false", "off", "0"): return 0 else: raise ValueError(f"invalid truth value {val!r}")
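The `strtobool` code field above (a vendored `distutils` helper) is pure Python and can be exercised standalone; this sketch reproduces its logic for illustration:

```python
def strtobool(val):
    # Convert a string representation of truth to 1 (true) or 0 (false),
    # mirroring the vendored distutils helper in this record.
    val = val.lower()
    if val in ("y", "yes", "t", "true", "on", "1"):
        return 1
    elif val in ("n", "no", "f", "false", "off", "0"):
        return 0
    else:
        raise ValueError(f"invalid truth value {val!r}")

print(strtobool("Yes"), strtobool("off"))  # -> 1 0
```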
23,210
108,482
59
lib/matplotlib/artist.py
20
7
def convert_xunits(self, x): ax = getattr(self, 'axes', None) if ax is None or ax.xaxis is None:
Update artist.py (#23150)
convert_xunits
3df958c760dbde3a6c576fefa7827a136385b5c3
matplotlib
artist.py
9
5
https://github.com/matplotlib/matplotlib.git
3
40
0
17
65
Python
{ "docstring": "\n Convert *x* using the unit type of the xaxis.\n\n If the artist is not contained in an Axes or if the xaxis does not\n have units, *x* itself is returned.\n ", "language": "en", "n_whitespaces": 59, "n_words": 30, "vocab_size": 24 }
def convert_xunits(self, x): ax = getattr(self, 'axes', None) if ax is None or ax.xaxis is None: return x return ax.xaxis.convert_units(x)
53,206
212,222
204
bokeh/models/widgets/sliders.py
81
26
def value_as_datetime(self) -> tp.Tuple[datetime, datetime] | None:
Add DatetimeRangeSlider (#12034) * Add DatetimeRangeSlider * Add tests * Add docs
value_as_datetime
c9751009161f092b2e403d8cccccf5252c0dce1a
bokeh
sliders.py
11
16
https://github.com/bokeh/bokeh.git
4
87
0
49
267
Python
{ "docstring": " Convenience property to retrieve the value tuple as a tuple of\n datetime objects.\n \n Initial or selected range.\n \n Initial or selected value, throttled to report only on mouseup.\n \n The minimum allowable value.\n \n The maximum allowable value.\n \n The...
def value_as_datetime(self) -> tp.Tuple[datetime, datetime] | None: if self.value is None: return None v1, v2 = self.value if isinstance(v1, numbers.Number): d1 = datetime.utcfromtimestamp(v1 / 1000) else: d1 = v1 if isinstance(v2, num...
18,579
89,862
887
tests/sentry/receivers/test_onboarding.py
88
23
def test_first_event_with_minified_stack_trace_received(self, record_analytics): now = timezone.now() project = self.create_project(first_event=now) project_created.send(project=project, user=self.user, sender=type(project)) url = "http://localhost:3000" data = load_data...
ref(onboarding): Add function to record first event per project with min stack trace -(#42208)
test_first_event_with_minified_stack_trace_received
ce841204ef3b20d0f6ac812ebb06aebbc63547ac
sentry
test_onboarding.py
18
45
https://github.com/getsentry/sentry.git
1
198
0
70
339
Python
{ "docstring": "\n Test that an analytics event is recorded when\n a first event with minified stack trace is received\n ", "language": "en", "n_whitespaces": 39, "n_words": 17, "vocab_size": 15 }
def test_first_event_with_minified_stack_trace_received(self, record_analytics): now = timezone.now() project = self.create_project(first_event=now) project_created.send(project=project, user=self.user, sender=type(project)) url = "http://localhost:3000" data = load_data...
36,854
157,103
25
dask/array/backends.py
11
8
def arange(start, /, stop=None, step=1, *, dtype=None, meta=None, **kwargs): raise NotImplementedError
Backend library dispatching for IO in Dask-Array and Dask-DataFrame (#9475)
arange
c4d35f5515191409913827fd4faa3b69a3d7399a
dask
backends.py
6
2
https://github.com/dask/dask.git
1
31
0
11
46
Python
{ "docstring": "Create an ascending or descending array\n\n Returns evenly spaced values within the half-open interval\n ``[start, stop)`` as a one-dimensional array.\n ", "language": "en", "n_whitespaces": 41, "n_words": 20, "vocab_size": 20 }
def arange(start, /, stop=None, step=1, *, dtype=None, meta=None, **kwargs): raise NotImplementedError
32,341
141,365
35
python/ray/tune/checkpoint_manager.py
14
10
def best_checkpoints(self): checkpoints = sorted(self._top_persisted_checkpoints, key=lambda c: c.priority) return [wrappe
[tune/train] Consolidate checkpoint manager 3: Ray Tune (#24430) **Update**: This PR is now part 3 of a three PR group to consolidate the checkpoints. 1. Part 1 adds the common checkpoint management class #24771 2. Part 2 adds the integration for Ray Train #24772 3. This PR builds on #24772 and includes all chan...
best_checkpoints
8affbc7be6fdce169264b8db5b0276dbcc719f6d
ray
checkpoint_manager.py
11
3
https://github.com/ray-project/ray.git
2
33
0
14
53
Python
{ "docstring": "Returns best PERSISTENT checkpoints, sorted by score.", "language": "en", "n_whitespaces": 6, "n_words": 7, "vocab_size": 7 }
def best_checkpoints(self): checkpoints = sorted(self._top_persisted_checkpoints, key=lambda c: c.priority) return [wrapped.tracked_checkpoint for wrapped in checkpoints]
24,568
112,077
94
nni/runtime/config.py
44
15
def get_config_directory() -> Path: if os.getenv('NNI_CONFIG_DIR') is not None: config_dir = Path(os.getenv('NNI_CONFIG_DIR')) # type: ignore elif sys.prefix != sys.base_prefix or Path(sys.prefix, 'conda-meta').is_dir(): config_dir = Path(sys.prefix, 'nni') elif sys.platform == 'win32'...
Typehint and copyright header (#4669)
get_config_directory
5136a86d11a3602b283bad15098335fc6f005ae0
nni
config.py
13
15
https://github.com/microsoft/nni.git
5
106
0
34
186
Python
{ "docstring": "\n Get NNI config directory.\n Create it if not exist.\n ", "language": "en", "n_whitespaces": 19, "n_words": 9, "vocab_size": 9 }
def get_config_directory() -> Path: if os.getenv('NNI_CONFIG_DIR') is not None: config_dir = Path(os.getenv('NNI_CONFIG_DIR')) # type: ignore elif sys.prefix != sys.base_prefix or Path(sys.prefix, 'conda-meta').is_dir(): config_dir = Path(sys.prefix, 'nni') elif sys.platform == 'win32'...
71,748
247,570
124
tests/storage/test_background_update.py
30
8
def test_background_update_min_batch_set_in_config(self): # a very long-running individual update duration_ms = 50 self.get_success( self.store.db_pool.
Add config settings for background update parameters (#11980)
test_background_update_min_batch_set_in_config
ef3619e61d84493d98470eb2a69131d15eb1166b
synapse
test_background_update.py
13
19
https://github.com/matrix-org/synapse.git
1
103
0
24
71
Python
{ "docstring": "\n Test that the minimum batch size set in the config is used\n ", "language": "en", "n_whitespaces": 27, "n_words": 12, "vocab_size": 11 }
def test_background_update_min_batch_set_in_config(self): # a very long-running individual update duration_ms = 50 self.get_success( self.store.db_pool.simple_insert( "background_updates", values={"update_name": "test_update", "progress_json"...
45,962
188,999
114
psutil/_pswindows.py
79
18
def swap_memory(): mem = cext.virtual_mem() total_phys = mem[0] free_phys = mem[1] total_system = mem[2] free_system = mem[3] # Despite the name PageFile refers to total system
Fix typos
swap_memory
471b19d2aa799cd73bded23379e864dd35bec2b6
psutil
_pswindows.py
9
11
https://github.com/giampaolo/psutil.git
1
85
0
53
142
Python
{ "docstring": "Swap system memory as a (total, used, free, sin, sout) tuple.", "language": "en", "n_whitespaces": 10, "n_words": 11, "vocab_size": 11 }
def swap_memory(): mem = cext.virtual_mem() total_phys = mem[0] free_phys = mem[1] total_system = mem[2] free_system = mem[3] # Despite the name PageFile refers to total system memory here # thus physical memory values need to be subtracted to get swap values total = total_system ...
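The arithmetic in the `swap_memory` record — PageFile counters report total system memory, so physical RAM must be subtracted to get swap figures — can be sketched without the C extension. This is a simplified, assumption-laden sketch (the real psutil code reads counters from `cext.virtual_mem()`; the clamping details here are not guaranteed to match):

```python
from collections import namedtuple

sswap = namedtuple("sswap", ["total", "used", "free", "percent", "sin", "sout"])

def swap_from_counters(total_phys, free_phys, total_system, free_system):
    # "PageFile" counters include physical memory, so subtract the
    # physical figures to obtain swap-only totals (see record comment).
    total = total_system - total_phys
    free = free_system - free_phys
    used = total - free
    percent = round(used / total * 100, 1) if total else 0.0
    # sin/sout are not exposed by these counters; reported as 0 here.
    return sswap(total, used, free, percent, 0, 0)
```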
13,832
65,243
9
erpnext/accounts/report/general_ledger/general_ledger.py
17
9
def get_supplier_invoice_details(): inv_details = {} for d in frappe.db.sql( , as_dict=1, ): inv_details[d.name] = d.bill_no return inv_details
style: format code with black
get_supplier_invoice_details
494bd9ef78313436f0424b918f200dab8fc7c20b
erpnext
general_ledger.py
10
9
https://github.com/frappe/erpnext.git
2
37
0
15
59
Python
{ "docstring": " select name, bill_no from `tabPurchase Invoice`\n\t\twhere docstatus = 1 and bill_no is not null and bill_no != '' ", "language": "en", "n_whitespaces": 19, "n_words": 19, "vocab_size": 16 }
def get_supplier_invoice_details(): inv_details = {} for d in frappe.db.sql( , as_dict=1, ): inv_details[d.name] = d.bill_no return inv_details
52,665
209,387
58
scapy/contrib/dce_rpc.py
23
3
def dce_rpc_endianess(pkt): if pkt.endianness == 0: # big endian return ">" elif pkt.endianness == 1: # little endian return "<"
Add SPDX License identifiers (#3655) * Add SPDX License identifiers * Relicense `ldp.py` with author consent See https://github.com/secdev/scapy/issues/3478 * Apply guedou suggestions * Relicense someim under GPL2 * DCE/RPC licensing
dce_rpc_endianess
9420c2229bf5330c2cc580f114f63f920a68db10
scapy
dce_rpc.py
9
7
https://github.com/secdev/scapy.git
3
28
0
17
56
Python
{ "docstring": "Determine the right endianness sign for a given DCE/RPC packet", "language": "en", "n_whitespaces": 9, "n_words": 10, "vocab_size": 10 }
def dce_rpc_endianess(pkt): if pkt.endianness == 0: # big endian return ">" elif pkt.endianness == 1: # little endian return "<" else: return "!"
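The endianness dispatch in this record depends only on the packet's `endianness` field, so it can be exercised without Scapy; `FakePkt` below is a hypothetical stand-in for the DCE/RPC packet class:

```python
class FakePkt:
    # Hypothetical stand-in for a Scapy DCE/RPC packet: only the
    # `endianness` attribute matters for the dispatch below.
    def __init__(self, endianness):
        self.endianness = endianness

def dce_rpc_endianess(pkt):
    # Map the DCE/RPC endianness flag to a struct format prefix:
    # 0 -> big endian, 1 -> little endian, anything else -> network order.
    if pkt.endianness == 0:
        return ">"
    elif pkt.endianness == 1:
        return "<"
    else:
        return "!"
```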
78,298
266,117
79
netbox/utilities/utils.py
30
16
def deserialize_object(model, fields, pk=None): content_type = ContentType.objects.get_fo
Closes #10851: New staging mechanism (#10890) * WIP * Convert checkout() context manager to a class * Misc cleanup * Drop unique constraint from Change model * Extend staging tests * Misc cleanup * Incorporate M2M changes * Don't cancel wipe out creation records when an object is deleted * Rena...
deserialize_object
a5308ea28e851a4ddb65a4e7ca2297b641e5891f
netbox
utils.py
12
11
https://github.com/netbox-community/netbox.git
2
83
0
25
144
Python
{ "docstring": "\n Instantiate an object from the given model and field data. Functions as\n the complement to serialize_object().\n ", "language": "en", "n_whitespaces": 26, "n_words": 16, "vocab_size": 15 }
def deserialize_object(model, fields, pk=None): content_type = ContentType.objects.get_for_model(model) if 'custom_fields' in fields: fields['custom_field_data'] = fields.pop('custom_fields') data = { 'model': '.'.join(content_type.natural_key()), 'pk': pk, 'fields': fie...
108,848
310,161
45
tests/test_setup.py
23
12
async def test_component_not_installed_if_requirement_fails(hass): hass.config.skip_pip = False mock_integration(hass, MockModule("comp", requirements=["package==0.0.1"])) with patch("homeassistant.util.package.install_package", return_value=False): assert not await setup.async_
Make setup tests async (#64456) Co-authored-by: Franck Nijhof <git@frenck.dev>
test_component_not_installed_if_requirement_fails
7d85c00b91cd989dfead3246a65eb297d27e935b
core
test_setup.py
12
6
https://github.com/home-assistant/core.git
1
61
0
21
108
Python
{ "docstring": "Component setup should fail if requirement can't install.", "language": "en", "n_whitespaces": 7, "n_words": 8, "vocab_size": 8 }
async def test_component_not_installed_if_requirement_fails(hass): hass.config.skip_pip = False mock_integration(hass, MockModule("comp", requirements=["package==0.0.1"])) with patch("homeassistant.util.package.install_package", return_value=False): assert not await setup.async_setup_component...
40,078
167,694
174
pandas/core/config_init.py
105
52
def use_numba_cb(key) -> None: from pandas.core.util import numba_ numba_.set_use_numba(cf.get_option(key)) with cf.config_prefix("compute"): cf.register_option( "use_bottleneck", True, use_bottleneck_doc, validator=is_bool, cb=use_bottleneck_cb, ) cf.regis...
TYP: return values in core/*.py (#47587) * TYP: return values in core/*.py * fix test * to_html * to_html part 2 * DataFrame.query * more overloads * fix query? * increase stacklevel by one * fix rename_axis * and an overload for DataFrame.eval * address comments * fix typevar
use_numba_cb
9612375ca28ade056f15d4338f1bfde5d045c9fc
pandas
config_init.py
9
3
https://github.com/pandas-dev/pandas.git
1
26
0
64
372
Python
{ "docstring": "\n: int\n Floating point output precision in terms of number of places after the\n decimal, for regular formatting as well as scientific notation. Similar\n to ``precision`` in :meth:`numpy.set_printoptions`.\n\n: int\n Default space for DataFrame columns.\n\n: int\n If max_rows is exce...
def use_numba_cb(key) -> None: from pandas.core.util import numba_ numba_.set_use_numba(cf.get_option(key)) with cf.config_prefix("compute"): cf.register_option( "use_bottleneck", True, use_bottleneck_doc, validator=is_bool, cb=use_bottleneck_cb, ) cf.regis...
72,393
248,638
295
tests/rest/media/v1/test_html_preview.py
55
8
def test_twitter_tag(self) -> None: html = b tree = decode_body(html, "http://example.com/test.html") og = parse_html_to_open_graph(tree) self.assertEqual( og, { "og:title": None, "og:description": "Description", ...
Improve URL previews for sites with only Twitter card information. (#13056) Pull out `twitter:` meta tags when generating a preview and use it to augment any `og:` meta tags. Prefers Open Graph information over Twitter card information.
test_twitter_tag
0fcc0ae37c959116c910f349a8025bd6921fdfc8
synapse
test_html_preview.py
10
38
https://github.com/matrix-org/synapse.git
1
88
0
34
159
Python
{ "docstring": "Twitter card tags should be used if nothing else is available.\n <html>\n <meta name=\"twitter:card\" content=\"summary\">\n <meta name=\"twitter:description\" content=\"Description\">\n <meta name=\"twitter:site\" content=\"@matrixdotorg\">\n </html>\n \n ...
def test_twitter_tag(self) -> None: html = b tree = decode_body(html, "http://example.com/test.html") og = parse_html_to_open_graph(tree) self.assertEqual( og, { "og:title": None, "og:description": "Description", ...
14,812
68,528
108
erpnext/accounts/doctype/tax_rule/tax_rule.py
159
39
def get_tax_template(posting_date, args): args = frappe._dict(args) conditions = [] if posting_date: conditions.append( f ) else: conditions.appen
refactor: tax rule validity query (#30934)
get_tax_template
05dd1d6d15c6c8c66165e9f267078c3cf9aec10e
erpnext
tax_rule.py
18
51
https://github.com/frappe/erpnext.git
15
312
0
103
559
Python
{ "docstring": "Get matching tax rule(from_date is null or from_date <= '{posting_date}')\n\t\t\tand (to_date is null or to_date >= '{posting_date}')select * from `tabTax Rule`\n\t\twhere {0}", "language": "en", "n_whitespaces": 21, "n_words": 24, "vocab_size": 21 }
def get_tax_template(posting_date, args): args = frappe._dict(args) conditions = [] if posting_date: conditions.append( f ) else: conditions.append("(from_date is null) and (to_date is null)") conditions.append( "ifnull(tax_category, '') = {0}".format(frappe.db.escape(cstr(args.get("tax_category")))...
42,010
176,628
97
networkx/generators/classic.py
40
16
def wheel_graph(n, create_using=None): _, nodes = n G = empty_graph(nodes, create_using) if G.is_directed(): raise
Adjust the usage of nodes_or_number decorator (#5599) * recorrect typo in decorators.py * Update tests to show troubles in current code * fix troubles with usage of nodes_or_number * fix typo * remove nodes_or_number where that makes sense * Reinclude nodes_or_numbers and add some tests for nonstandard ...
wheel_graph
de1d00f20e0bc14f1cc911b3486e50225a8fa168
networkx
classic.py
14
11
https://github.com/networkx/networkx.git
5
86
0
32
139
Python
{ "docstring": "Return the wheel graph\n\n The wheel graph consists of a hub node connected to a cycle of (n-1) nodes.\n\n Parameters\n ----------\n n : int or iterable\n If an integer, node labels are 0 to n with center 0.\n If an iterable of nodes, the center is the first.\n create_usin...
def wheel_graph(n, create_using=None): _, nodes = n G = empty_graph(nodes, create_using) if G.is_directed(): raise NetworkXError("Directed Graph not supported") if len(nodes) > 1: hub, *rim = nodes G.add_edges_from((hub, node) for node in rim) if len(rim) > 1: ...
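The wheel-graph construction above — a hub connected to every node of an (n-1)-cycle — can be sketched as a plain edge-list builder without NetworkX (a simplified sketch; it ignores `create_using` and the directed-graph check):

```python
def wheel_edges(nodes):
    # Build the edge list of a wheel graph: the first node is the hub,
    # joined to every rim node; the remaining nodes form a cycle.
    nodes = list(nodes)
    hub, *rim = nodes
    edges = [(hub, r) for r in rim]        # spokes
    if len(rim) > 1:
        edges += list(zip(rim, rim[1:]))   # path along the rim
    if len(rim) > 2:
        edges.append((rim[-1], rim[0]))    # close the cycle
    return edges
```

For n nodes (n >= 4) this yields 2*(n-1) edges: n-1 spokes plus the n-1 rim edges.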
45,974
189,036
199
scripts/internal/print_announce.py
70
18
def get_changes(): with open(HISTORY) as f: lines = f.readlines() block = [] # eliminate the part preceding the first block for i, line in enumerate(lines): line = lines.pop(0) if line.startswith('===='): break lines.pop(0) for i, line in enumerate(lin...
fix print_announce.py
get_changes
c14744db097b1955f2b668dc753b2d2439db0bdf
psutil
print_announce.py
13
21
https://github.com/giampaolo/psutil.git
7
151
0
44
260
Python
{ "docstring": "Get the most recent changes for this release by parsing\n HISTORY.rst file.\n ", "language": "en", "n_whitespaces": 18, "n_words": 12, "vocab_size": 12 }
def get_changes(): with open(HISTORY) as f: lines = f.readlines() block = [] # eliminate the part preceding the first block for i, line in enumerate(lines): line = lines.pop(0) if line.startswith('===='): break lines.pop(0) for i, line in enumerate(lin...
20,182
100,727
102
lib/gui/popup_session.py
28
13
def _check_valid_data(self) -> bool: logger.debug("Validating data. %s", {key: len(val) for key, val in self._display_data.stats.items()}) if any(len(val) == 0 # pylint:disable=len-as-condition for val in self._display_data.stats.values()):
Bugfixes: - Stats graph - Handle NaNs in data - logger - de-elevate matplotlib font messages
_check_valid_data
afec52309326304f4323029039e49bfcf928ef43
faceswap
popup_session.py
13
15
https://github.com/deepfakes/faceswap.git
4
64
0
24
105
Python
{ "docstring": " Check that the selections holds valid data to display\n NB: len-as-condition is used as data could be a list or a numpy array\n\n Returns\n -------\n bool\n ``True` if there is data to be displayed, otherwise ``False``\n ", "language": "en", "n_wh...
def _check_valid_data(self) -> bool: logger.debug("Validating data. %s", {key: len(val) for key, val in self._display_data.stats.items()}) if any(len(val) == 0 # pylint:disable=len-as-condition for val in self._display_data.stats.values()): retur...
90,441
291,332
538
homeassistant/components/ibeacon/coordinator.py
144
22
def _async_check_unavailable_groups_with_random_macs(self) -> None: now = MONOTONIC_TIME() gone_unavailable = [ group_id for group_id in self._group_ids_random_macs if group_id not in self._unavailable_group_ids and (service_info := self._last_see...
Fix iBeacons with infrequent random mac address changes unexpectedly going unavailable (#82668) fixes https://github.com/home-assistant/core/issues/79781
_async_check_unavailable_groups_with_random_macs
09c3df7eb258295211a8216c2039843b09aa244b
core
coordinator.py
17
20
https://github.com/home-assistant/core.git
7
100
0
92
166
Python
{ "docstring": "Check for random mac groups that have not been seen in a while and mark them as unavailable.", "language": "en", "n_whitespaces": 17, "n_words": 18, "vocab_size": 18 }
def _async_check_unavailable_groups_with_random_macs(self) -> None: now = MONOTONIC_TIME() gone_unavailable = [ group_id for group_id in self._group_ids_random_macs if group_id not in self._unavailable_group_ids and (service_info := self._last_see...
69,645
241,673
275
pytorch_lightning/trainer/connectors/checkpoint_connector.py
76
9
def restore_optimizers_and_schedulers(self) -> None: if not self._loaded_checkpoint: return if self.trainer.strategy.lightning_restore_optimizer: # validation if "optimizer_states" not in self._loaded_checkpoint:
Fix restoring lr scheduler states with deepspeed strategy (#11322) Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com> Co-authored-by: thomas chaton <thomas@grid.ai>
restore_optimizers_and_schedulers
9c8f52ccd1a1859502f705e0567f2d83d57ff93a
lightning
checkpoint_connector.py
13
17
https://github.com/Lightning-AI/lightning.git
5
62
0
42
117
Python
{ "docstring": "Restores the optimizers and learning rate scheduler states from the pre-loaded checkpoint.", "language": "en", "n_whitespaces": 11, "n_words": 12, "vocab_size": 11 }
def restore_optimizers_and_schedulers(self) -> None: if not self._loaded_checkpoint: return if self.trainer.strategy.lightning_restore_optimizer: # validation if "optimizer_states" not in self._loaded_checkpoint: raise KeyError( ...
4,210
22,138
57
pipenv/patched/pip/_vendor/requests/utils.py
32
11
def urldefragauth(url): scheme, netloc, path, params, query, fragment = urlparse(url) # see func:`prepend_scheme_if_needed` if not netloc: netloc, path = pat
Rename notpip to pip. Vendor in pip-22.2.1 and latest requirementslib and vistir.
urldefragauth
cd5a9683be69c86c8f3adcd13385a9bc5db198ec
pipenv
utils.py
10
6
https://github.com/pypa/pipenv.git
2
64
0
23
99
Python
{ "docstring": "\n Given a url remove the fragment and the authentication part.\n\n :rtype: str\n ", "language": "en", "n_whitespaces": 22, "n_words": 12, "vocab_size": 11 }
def urldefragauth(url): scheme, netloc, path, params, query, fragment = urlparse(url) # see func:`prepend_scheme_if_needed` if not netloc: netloc, path = path, netloc netloc = netloc.rsplit("@", 1)[-1] return urlunparse((scheme, netloc, path, params, query, ""))
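The `urldefragauth` code field uses only the standard library, so it can be run standalone; this sketch mirrors the `requests.utils` helper in the record:

```python
from urllib.parse import urlparse, urlunparse

def urldefragauth(url):
    # Strip the fragment and the user:password@ authentication part.
    scheme, netloc, path, params, query, fragment = urlparse(url)
    # see `prepend_scheme_if_needed`: for scheme-less URLs, urlparse
    # leaves the host in `path` rather than `netloc`
    if not netloc:
        netloc, path = path, netloc
    netloc = netloc.rsplit("@", 1)[-1]
    return urlunparse((scheme, netloc, path, params, query, ""))

print(urldefragauth("http://user:pass@example.com/a#frag"))  # -> http://example.com/a
```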
73,004
249,582
68
tests/storage/test_registration.py
19
12
def test_approval_not_required(self) -> None: self.get_success(self.store.register_user(self.user_id, self.pwhash)) user = self.get_success(self.store.get_user_by_id(self.user_id)) assert user is not None self.assertTrue(user["approved"]) approved = self.get_success(se...
Allow admins to require a manual approval process before new accounts can be used (using MSC3866) (#13556)
test_approval_not_required
be76cd8200b18f3c68b895f85ac7ef5b0ddc2466
synapse
test_registration.py
11
10
https://github.com/matrix-org/synapse.git
1
81
0
17
132
Python
{ "docstring": "Tests that if we don't require approval for new accounts, newly created\n accounts are automatically marked as approved.\n ", "language": "en", "n_whitespaces": 32, "n_words": 18, "vocab_size": 18 }
def test_approval_not_required(self) -> None: self.get_success(self.store.register_user(self.user_id, self.pwhash)) user = self.get_success(self.store.get_user_by_id(self.user_id)) assert user is not None self.assertTrue(user["approved"]) approved = self.get_success(se...
57,068
223,791
107
python3.10.4/Lib/email/message.py
28
12
def get_all(self, name, failobj=None): valu
add python 3.10.4 for windows
get_all
8198943edd73a363c266633e1aa5b2a9e9c9f526
XX-Net
message.py
14
9
https://github.com/XX-net/XX-Net.git
4
64
0
24
103
Python
{ "docstring": "Return a list of all the values for the named field.\n\n These will be sorted in the order they appeared in the original\n message, and may contain duplicates. Any fields deleted and\n re-inserted are always appended to the header list.\n\n If no such fields exist, failobj...
def get_all(self, name, failobj=None): values = [] name = name.lower() for k, v in self._headers: if k.lower() == name: values.append(self.policy.header_fetch_parse(k, v)) if not values: return failobj return values
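Since this record's `get_all` lives in the standard library's `email.message.Message`, its behavior — case-insensitive lookup, insertion order preserved, `failobj` on a miss — can be demonstrated directly:

```python
from email.message import Message

# Duplicate headers accumulate in insertion order; get_all returns
# every value for a (case-insensitive) header name.
msg = Message()
msg["Received"] = "from a"
msg["Received"] = "from b"
msg["Subject"] = "hi"

print(msg.get_all("received"))       # -> ['from a', 'from b']
print(msg.get_all("x-missing", []))  # -> []
```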
15,828
72,102
105
wagtail/admin/tests/test_privacy.py
31
14
def test_explorer_private_child(self): response = self.client.get( reverse("wagtailadmin_explore", args=(self.private_child_page.id,)) ) # Check the response self.assertEqual(response.status_code, 200) # Check the privacy indicator is public self.a
Reformat with black
test_explorer_private_child
d10f15e55806c6944827d801cd9c2d53f5da4186
wagtail
test_privacy.py
14
8
https://github.com/wagtail/wagtail.git
1
64
0
25
110
Python
{ "docstring": "\n This tests that the privacy indicator on the private child pages explore view is set to \"PRIVATE\"\n ", "language": "en", "n_whitespaces": 32, "n_words": 17, "vocab_size": 16 }
def test_explorer_private_child(self): response = self.client.get( reverse("wagtailadmin_explore", args=(self.private_child_page.id,)) ) # Check the response self.assertEqual(response.status_code, 200) # Check the privacy indicator is public self.as...
120,535
334,167
40
utils/check_dummies.py
18
10
def find_backend(line): if _re_test_backend.search(line) is None: return No
upload some cleaning tools
find_backend
95f4256fc905b6e29e5ea0f245dcf88f72a9ddd1
diffusers
check_dummies.py
10
6
https://github.com/huggingface/diffusers.git
3
47
0
17
79
Python
{ "docstring": "Find one (or multiple) backend in a code line of the init.", "language": "en", "n_whitespaces": 11, "n_words": 12, "vocab_size": 12 }
def find_backend(line): if _re_test_backend.search(line) is None: return None backends = [b[0] for b in _re_backend.findall(line)] backends.sort() return "_and_".join(backends)
42,246
177,039
70
networkx/classes/graphviews.py
36
18
def subgraph_view(G, filter_node=no_filter, filter_edge=no_filter): newG = nx.freeze(G.__class__()) newG._NODE_OK
Attempt to reverse slowdown from hasattr needed for cached_property (#5836) * Automate reset of cache for _adj,_pred,_succ * Make G._adj a data descriptor that resets G.adj when needed. * update places in the code where both G._succ and G._adj are changed This is no longer needed since G._succ and G._adj are...
subgraph_view
2fb00bb8b9ed1e2917e5bc1aac04c558bd23c6d8
networkx
graphviews.py
10
19
https://github.com/networkx/networkx.git
3
132
0
29
114
Python
{ "docstring": "View of `G` applying a filter on nodes and edges.\n\n `subgraph_view` provides a read-only view of the input graph that excludes\n nodes and edges based on the outcome of two filter functions `filter_node`\n and `filter_edge`.\n\n The `filter_node` function takes one argument --- the node ...
def subgraph_view(G, filter_node=no_filter, filter_edge=no_filter): newG = nx.freeze(G.__class__()) newG._NODE_OK = filter_node newG._EDGE_OK = filter_edge # create view by assigning attributes from G newG._graph = G newG.graph = G.graph newG._node = FilterAtlas(G._node, filter_node) ...
7,451
41,875
153
seaborn/utils.py
47
9
def _deprecate_ci(errorbar, ci): if ci != "deprecated": if ci is None: errorbar = None elif ci == "sd": errorbar = "sd" else: errorbar = ("ci", ci)
Housekeeping on relational plot parameters (#2855) * Do some housekeeping on lineplot ci deprecation * Remove some unused parameters from scatterplot * Remove incorrect statement from relplot docstring * Update lineplot ci= deprecation test
_deprecate_ci
26bf4b3b645edc405ca52b533b8d68273aeba7d1
seaborn
utils.py
14
14
https://github.com/mwaskom/seaborn.git
4
59
0
37
117
Python
{ "docstring": "\n Warn on usage of ci= and convert to appropriate errorbar= arg.\n\n ci was deprecated when errorbar was added in 0.12. It should not be removed\n completely for some time, but it can be moved out of function definitions\n (and extracted from kwargs) after one cycle.\n\n ", "language...
def _deprecate_ci(errorbar, ci): if ci != "deprecated": if ci is None: errorbar = None elif ci == "sd": errorbar = "sd" else: errorbar = ("ci", ci) msg = ( "\n\nThe `ci` parameter is deprecated. " f"Use `errorbar={repr(...
38,770
160,870
42
numpy/ma/core.py
10
7
def __sub__(self, other): if self._delegate_binop(other):
ENH: Adding __array_ufunc__ capability to MaskedArrays. This enables any ufunc numpy operations that are called on a MaskedArray to use the masked version of that function automatically without needing to resort to np.ma.func() calls.
__sub__
6d77c591c59b5678f14ae5af2127eebb7d2415bc
numpy
core.py
7
4
https://github.com/numpy/numpy.git
2
27
0
9
44
Python
{ "docstring": "\n Subtract other from self, and return a new masked array.\n\n ", "language": "en", "n_whitespaces": 25, "n_words": 10, "vocab_size": 10 }
def __sub__(self, other): if self._delegate_binop(other): return NotImplemented return np.subtract(self, other)
@pytest.fixture
87,135
287,952
146
tests/components/plugwise/conftest.py
51
21
def mock_smile_adam_2() -> Generator[None, MagicMock, None]: chosen_env = "m_adam_heating" with patch( "homeassistant.components.plugwise.gateway.Smile", autospec=True ) as smile_mock: smile = smile_mock.return_value smile.gateway_id = "da224107914542988a88561b4
Bump plugwise to v0.21.3, add related new features (#76610) Co-authored-by: Franck Nijhof <frenck@frenck.nl>
mock_smile_adam_2
2667f0b792b1f936aeb5958cc40d5dee26350bf6
core
conftest.py
11
17
https://github.com/home-assistant/core.git
1
95
1
39
180
Python
{ "docstring": "Create a 2nd Mock Adam environment for testing exceptions.", "language": "en", "n_whitespaces": 8, "n_words": 9, "vocab_size": 9 }
def mock_smile_adam_2() -> Generator[None, MagicMock, None]: chosen_env = "m_adam_heating" with patch( "homeassistant.components.plugwise.gateway.Smile", autospec=True ) as smile_mock: smile = smile_mock.return_value smile.gateway_id = "da224107914542988a88561b4452b0f6" ...
22,518
106,941
1,017
lib/mpl_toolkits/mplot3d/axes3d.py
393
52
def plot_wireframe(self, X, Y, Z, **kwargs): had_data = self.has_data() if Z.ndim != 2: raise ValueError("Argument Z must be 2-dimensional.") # FIXME: Support masked arrays X, Y, Z = np.broadcast_arrays(X, Y, Z) rows, cols = Z.shape has_stride = 'rs...
Remove *args deprecations
plot_wireframe
6ef6b37fc2113c041f7d2643d70b553ec335d597
matplotlib
axes3d.py
19
54
https://github.com/matplotlib/matplotlib.git
30
539
0
193
846
Python
{ "docstring": "\n Plot a 3D wireframe.\n\n .. note::\n\n The *rcount* and *ccount* kwargs, which both default to 50,\n determine the maximum number of samples used in each direction. If\n the input data is larger, it will be downsampled (by slicing) to\n these n...
def plot_wireframe(self, X, Y, Z, **kwargs): had_data = self.has_data() if Z.ndim != 2: raise ValueError("Argument Z must be 2-dimensional.") # FIXME: Support masked arrays X, Y, Z = np.broadcast_arrays(X, Y, Z) rows, cols = Z.shape has_stride = 'rs...
50,774
204,534
372
django/core/handlers/base.py
97
14
def check_response(self, response, callback, name=None): if not (response is None or asyncio.iscoroutine(response)): return if not name: if isinstance(callback, types.FunctionType): # FBV
Refs #33476 -- Reformatted code with Black.
check_response
9c19aff7c7561e3a82978a272ecdaad40dda5c00
django
base.py
15
22
https://github.com/django/django.git
7
105
0
63
181
Python
{ "docstring": "\n Raise an error if the view returned None or an uncalled coroutine.\n ", "language": "en", "n_whitespaces": 27, "n_words": 12, "vocab_size": 11 }
def check_response(self, response, callback, name=None): if not (response is None or asyncio.iscoroutine(response)): return if not name: if isinstance(callback, types.FunctionType): # FBV name = "The view %s.%s" % (callback.__module__, callback.__name__)...
36,894
157,247
630
dask/dataframe/io/io.py
234
39
def _meta_from_array(x, columns=None, index=None, meta=None): if x.ndim > 2: raise ValueError( "from_array does not input more than 2D array, got" " array with shape %r" % (x.shape,) ) if index is not None: if not isinstance(index, Index): raise...
Support `cupy.ndarray` to `cudf.DataFrame` dispatching in `dask.dataframe` (#9579)
_meta_from_array
0d8e12be4c2261b3457978c16aba7e893b1cf4a1
dask
io.py
18
47
https://github.com/dask/dask.git
21
397
0
136
656
Python
{ "docstring": "Create empty DataFrame or Series which has correct dtype", "language": "en", "n_whitespaces": 8, "n_words": 9, "vocab_size": 9 }
def _meta_from_array(x, columns=None, index=None, meta=None): if x.ndim > 2: raise ValueError( "from_array does not input more than 2D array, got" " array with shape %r" % (x.shape,) ) if index is not None: if not isinstance(index, Index): raise...
16,070
73,615
82
wagtail/contrib/typed_table_block/blocks.py
16
9
def rows(self): for row in self.row_data: yield [ column["block"].bind(value) for column,
Reformat with black
rows
d10f15e55806c6944827d801cd9c2d53f5da4186
wagtail
blocks.py
14
6
https://github.com/wagtail/wagtail.git
3
41
0
14
68
Python
{ "docstring": "\n Iterate over the rows of the table, with each row returned as a list of BoundBlocks\n ", "language": "en", "n_whitespaces": 31, "n_words": 16, "vocab_size": 14 }
def rows(self): for row in self.row_data: yield [ column["block"].bind(value) for column, value in zip(self.columns, row["values"]) ]
48,104
196,686
18
sympy/stats/crv_types.py
15
6
def Uniform(name, left, right): r return rv(name, UniformDistribution, (left, right)) #--------------------------------------------------------
Documentation cleanup 5
Uniform
9ad8ab9fe58051cf11626ba6654852fcfec60147
sympy
crv_types.py
8
60
https://github.com/sympy/sympy.git
1
24
0
15
36
Python
{ "docstring": "\n Create a continuous random variable with a uniform distribution.\n\n Explanation\n ===========\n\n The density of the uniform distribution is given by\n\n .. math::\n f(x) := \\begin{cases}\n \\frac{1}{b - a} & \\text{for } x \\in [a,b] \\\\\n ...
def Uniform(name, left, right): r return rv(name, UniformDistribution, (left, right)) #------------------------------------------------------------------------------- # UniformSum distribution ------------------------------------------------------
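The docstring above gives the uniform density f(x) = 1/(b - a) on [a, b] and 0 elsewhere. A plain-Python sketch of that formula, independent of SymPy (the helper name `uniform_pdf` is mine, not from the record):

```python
def uniform_pdf(x, a, b):
    """Density of the continuous uniform distribution on [a, b]."""
    if b <= a:
        raise ValueError("require a < b")
    # 1/(b - a) inside the support, 0 outside.
    return 1.0 / (b - a) if a <= x <= b else 0.0

print(uniform_pdf(0.5, 0.0, 1.0))  # 1.0
print(uniform_pdf(1.0, 0.0, 4.0))  # 0.25
print(uniform_pdf(3.0, 0.0, 2.0))  # 0.0 (outside the support)
```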
56,953
223,527
71
python3.10.4/Lib/email/_header_value_parser.py
29
12
def get_ttext(value): m = _non_token_end_matcher(value) if not m: raise errors.HeaderParseError( "expected ttext but found '{}'".format(value)) ttext
add python 3.10.4 for windows
get_ttext
8198943edd73a363c266633e1aa5b2a9e9c9f526
XX-Net
_header_value_parser.py
12
10
https://github.com/XX-net/XX-Net.git
2
61
0
23
106
Python
{ "docstring": "ttext = <matches _ttext_matcher>\n\n We allow any non-TOKEN_ENDS in ttext, but add defects to the token's\n defects list if we find non-ttext characters. We also register defects for\n *any* non-printables even though the RFC doesn't exclude all of them,\n because we follow the spirit of ...
def get_ttext(value): m = _non_token_end_matcher(value) if not m: raise errors.HeaderParseError( "expected ttext but found '{}'".format(value)) ttext = m.group() value = value[len(ttext):] ttext = ValueTerminal(ttext, 'ttext') _validate_xtext(ttext) return ttext, val...
47,945
196,497
84
sympy/codegen/ast.py
34
9
def kwargs(self, exclude=(), apply=None): kwargs = {k: getattr(self, k) for k in self._fields if k not in exclude}
Fixed issues with __slots__ (overlaps and omission in base classes) Across several modules, two types of slot problems were detected. 1) Overlaps A class redefines slots already present in a superclass. This reduces the memory savings from slots, as well as potentially introduces unpredictable behavior. 2) ...
kwargs
338775324184a00c6bf50b8339ebd805c2bf4879
sympy
ast.py
11
17
https://github.com/sympy/sympy.git
5
67
0
25
103
Python
{ "docstring": " Get instance's attributes as dict of keyword arguments.\n\n Parameters\n ==========\n\n exclude : collection of str\n Collection of keywords to exclude.\n\n apply : callable, optional\n Function to apply to all values.\n ", "language": "en", ...
def kwargs(self, exclude=(), apply=None): kwargs = {k: getattr(self, k) for k in self._fields if k not in exclude} if apply is not None: return {k: apply(v) for k, v in kwargs.items()} else: return kwargs
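The `kwargs` helper above collects an instance's declared `_fields` into a dict, optionally excluding some keys and mapping a function over the values. A self-contained sketch with a toy class (the `Point` class is illustrative, not from SymPy):

```python
class Point:
    _fields = ("x", "y", "label")

    def __init__(self, x, y, label):
        self.x, self.y, self.label = x, y, label

    def kwargs(self, exclude=(), apply=None):
        # Collect every declared field except the excluded ones.
        kwargs = {k: getattr(self, k) for k in self._fields if k not in exclude}
        if apply is not None:
            return {k: apply(v) for k, v in kwargs.items()}
        return kwargs

p = Point(1, 2, "origin")
print(p.kwargs(exclude=("label",)))             # {'x': 1, 'y': 2}
print(p.kwargs(exclude=("label",), apply=str))  # {'x': '1', 'y': '2'}
```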
31,883
140,165
22
python/ray/serve/deployment_executor_node.py
8
6
def _execute_impl(self, *args, **kwargs) -> RayServeHandle: return self._deployment_handle
[Serve][Deployment Graph][Perf] Add minimal executor DAGNode (#24754) closes #24475 Current deployment graph has big perf issues compare with using plain deployment handle, mostly because overhead of DAGNode traversal mechanism. We need this mechanism to empower DAG API, specially deeply nested objects in args wher...
_execute_impl
f27e85cd7df5ca2873ef6231200a1530e16ac35d
ray
deployment_executor_node.py
6
6
https://github.com/ray-project/ray.git
1
18
0
8
30
Python
{ "docstring": "Does not call into anything or produce a new value, as the time\n this function gets called, all child nodes are already resolved to\n ObjectRefs.\n ", "language": "en", "n_whitespaces": 46, "n_words": 25, "vocab_size": 25 }
def _execute_impl(self, *args, **kwargs) -> RayServeHandle: return self._deployment_handle
16,040
73,523
87
wagtail/contrib/settings/tests/test_templates.py
23
9
def test_settings_use_default_site(self): context = {} # This should use the default site template = '{{ settings("tests.testsetting", use_default_site=True).title}}' self.assertEqual( self.render(template, context, request_co
Reformat with black
test_settings_use_default_site
d10f15e55806c6944827d801cd9c2d53f5da4186
wagtail
test_templates.py
10
7
https://github.com/wagtail/wagtail.git
1
37
0
22
62
Python
{ "docstring": "\n Check that the {{ settings(use_default_site=True) }} option works with\n no site in the context\n ", "language": "en", "n_whitespaces": 36, "n_words": 14, "vocab_size": 13 }
def test_settings_use_default_site(self): context = {} # This should use the default site template = '{{ settings("tests.testsetting", use_default_site=True).title}}' self.assertEqual( self.render(template, context, request_context=False), self.default_...
43,365
181,571
40
tests/test_ffmpeg_reader.py
22
6
def test_stream_square_brackets_and_language(): infos = d = FFmpegInfosParser(infos, "clip.mp4").parse() assert d assert len(d["inputs"][0]["streams"]) == 2 assert d["inputs"][0]["streams"][0]["language"] == "eng" assert d["inputs"][0]["streams"][1]["language"] is None
Handle brackets and language in FFMPEG output (#1837) * Improve regex to handle brackets and language * Update CHANGELOG.md * Simplify `if`
test_stream_square_brackets_and_language
1393889d5bc29c8b7c4ed45bca4736d6dfdfad8d
moviepy
test_ffmpeg_reader.py
12
12
https://github.com/Zulko/moviepy.git
1
75
0
16
132
Python
{ "docstring": "\nInput #0, mpeg, from 'clip.mp4':\n Duration: 00:02:15.00, start: 52874.498178, bitrate: 266 kb/s\n Stream #0:0[0x1e0](eng): Video: ..., 25 tbr, 90k tbn, 50 tbc\n Stream #0:1[0x1c0](und): Audio: mp2, 0 channels, s16p\nAt least one output file must be specified", "language": "en", "n_whites...
def test_stream_square_brackets_and_language(): infos = d = FFmpegInfosParser(infos, "clip.mp4").parse() assert d assert len(d["inputs"][0]["streams"]) == 2 assert d["inputs"][0]["streams"][0]["language"] == "eng" assert d["inputs"][0]["streams"][1]["language"] is None
11,991
60,126
80
src/prefect/_internal/concurrency/primitives.py
19
8
async def wait(self) -> None:
Add thread-safe async primitives `Event` and `Future` (#7865) Co-authored-by: Serina Grill <42048900+serinamarie@users.noreply.github.com>
wait
a368874d1b145c1ec5201e5efd3c26ce7c1e8611
prefect
primitives.py
10
12
https://github.com/PrefectHQ/prefect.git
3
44
0
17
78
Python
{ "docstring": "\n Wait until the flag has been set.\n\n If the flag has already been set when this method is called, it returns immediately.\n ", "language": "en", "n_whitespaces": 44, "n_words": 22, "vocab_size": 18 }
async def wait(self) -> None: if self._is_set: return if not self._loop: self._loop = get_running_loop() self._event = asyncio.Event() await self._event.wait()
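The `wait` coroutine above lazily creates its `asyncio.Event` on first await, so the primitive can be constructed before any event loop exists. A runnable sketch of that lazy-initialization pattern (class name and details are mine, not Prefect's):

```python
import asyncio

class LazyEvent:
    """Flag that defers asyncio.Event creation until first awaited."""

    def __init__(self):
        self._is_set = False
        self._event = None  # created lazily, inside a running loop

    def set(self):
        self._is_set = True
        if self._event is not None:
            self._event.set()

    async def wait(self):
        if self._is_set:
            return  # already set: return immediately
        if self._event is None:
            self._event = asyncio.Event()
        await self._event.wait()

async def main():
    ev = LazyEvent()
    # Schedule set() for the next loop iteration, then block on wait().
    asyncio.get_running_loop().call_soon(ev.set)
    await ev.wait()
    return True

print(asyncio.run(main()))  # True
```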
11,516
56,385
570
src/prefect/agent.py
134
42
async def get_and_submit_flow_runs(self) -> List[FlowRun]: if not self.started: raise RuntimeError("Agent is not started. Use `async with OrionAgent()...`") self.logger.debug("Checking for flow runs...") before = pendulum.now("utc").add( seconds=self.prefetch_s...
Add message to indicate a work queue is paused The agent now checks if the work queue is paused when it does not find any submittable runs. We may want to reduce the frequency of this API call in the future, but it seems reasonable as a starting point.
get_and_submit_flow_runs
78825acff7ee179ddb1e98da6efa6d39e4e3d1bf
prefect
agent.py
14
41
https://github.com/PrefectHQ/prefect.git
11
202
0
90
362
Python
{ "docstring": "\n The principle method on agents. Queries for scheduled flow runs and submits\n them for execution in parallel.\n ", "language": "en", "n_whitespaces": 39, "n_words": 17, "vocab_size": 16 }
async def get_and_submit_flow_runs(self) -> List[FlowRun]: if not self.started: raise RuntimeError("Agent is not started. Use `async with OrionAgent()...`") self.logger.debug("Checking for flow runs...") before = pendulum.now("utc").add( seconds=self.prefetch_s...
81,709
276,718
120
keras/utils/conv_utils.py
68
8
def conv_output_length(input_length, filter_size, padding, stride, dilation=1): if input_lengt
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
conv_output_length
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
conv_utils.py
11
12
https://github.com/keras-team/keras.git
5
95
0
39
160
Python
{ "docstring": "Determines output length of a convolution given input length.\n\n Args:\n input_length: integer.\n filter_size: integer.\n padding: one of \"same\", \"valid\", \"full\", \"causal\"\n stride: integer.\n dilation: dilation rate, integer.\n\n Returns:\n The...
def conv_output_length(input_length, filter_size, padding, stride, dilation=1): if input_length is None: return None assert padding in {"same", "valid", "full", "causal"} dilated_filter_size = filter_size + (filter_size - 1) * (dilation - 1) if padding in ["same", "causal"]: output_...
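The `code` field above is cut off mid-branch; the sketch below completes the remaining branches from the standard output-length formulas the docstring describes (valid: L - F + 1; full: L + F - 1; same/causal: L; then ceil-divide by stride), so treat it as a reconstruction rather than the verbatim Keras source:

```python
def conv_output_length(input_length, filter_size, padding, stride, dilation=1):
    """Output length of a 1-D convolution along one spatial axis."""
    if input_length is None:
        return None
    assert padding in {"same", "valid", "full", "causal"}
    # Dilation spreads the taps apart, enlarging the effective filter.
    dilated_filter_size = filter_size + (filter_size - 1) * (dilation - 1)
    if padding in ["same", "causal"]:
        output_length = input_length
    elif padding == "valid":
        output_length = input_length - dilated_filter_size + 1
    elif padding == "full":
        output_length = input_length + dilated_filter_size - 1
    # Ceiling division by the stride.
    return (output_length + stride - 1) // stride

print(conv_output_length(10, 3, "valid", 1))  # 8
print(conv_output_length(10, 3, "same", 2))   # 5
print(conv_output_length(10, 3, "full", 1))   # 12
```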
77,807
264,784
28
netbox/dcim/models/cables.py
7
9
def get_split_nodes(self): rearport = path_node_to_object(self._nodes[-1]) return FrontPort.objects.filter(rear_port=rearp
Migrate CablePath to use two-dimensional array
get_split_nodes
82706eb3a68e963d7ac089478788b87892d4ee79
netbox
cables.py
10
3
https://github.com/netbox-community/netbox.git
1
29
0
7
49
Python
{ "docstring": "\n Return all available next segments in a split cable path.\n ", "language": "en", "n_whitespaces": 25, "n_words": 10, "vocab_size": 10 }
def get_split_nodes(self): rearport = path_node_to_object(self._nodes[-1]) return FrontPort.objects.filter(rear_port=rearport)
51,781
206,882
391
django/views/generic/list.py
113
16
def get_template_names(self): try: names = super().get_template_names() except ImproperlyConfigured: # If template_name isn't specified, it's not a problem -- # we just start with an empty list. names = [] # If the list is a queryset, we'...
Refs #33476 -- Reformatted code with Black.
get_template_names
9c19aff7c7561e3a82978a272ecdaad40dda5c00
django
list.py
15
20
https://github.com/django/django.git
4
86
0
85
155
Python
{ "docstring": "\n Return a list of template names to be used for the request. Must return\n a list. May not be called if render_to_response is overridden.\n ", "language": "en", "n_whitespaces": 46, "n_words": 24, "vocab_size": 22 }
def get_template_names(self): try: names = super().get_template_names() except ImproperlyConfigured: # If template_name isn't specified, it's not a problem -- # we just start with an empty list. names = [] # If the list is a queryset, we'...
45,965
189,007
352
scripts/internal/fix_flake8.py
112
26
def remove_lines(fname, entries): to_remove = [] for entry in entries: msg, issue, lineno, pos, descr = entry # 'module imported but not used' if issue == 'F401' and handle_f401(fname, lineno): to_remove.append(lineno) # 'blank line(s) at end of file' eli...
Fix typos
remove_lines
471b19d2aa799cd73bded23379e864dd35bec2b6
psutil
fix_flake8.py
16
25
https://github.com/giampaolo/psutil.git
11
185
0
80
310
Python
{ "docstring": "Check if we should remove lines, then do it.\n Return the number of lines removed.\n ", "language": "en", "n_whitespaces": 21, "n_words": 15, "vocab_size": 15 }
def remove_lines(fname, entries): to_remove = [] for entry in entries: msg, issue, lineno, pos, descr = entry # 'module imported but not used' if issue == 'F401' and handle_f401(fname, lineno): to_remove.append(lineno) # 'blank line(s) at end of file' eli...
24,770
112,855
217
nni/algorithms/hpo/bohb_advisor/bohb_advisor.py
39
12
def _get_one_trial_job(self): if not self.generated_hyper_configs: ret = { 'parameter_id': '-1_0_0', 'parameter_source': 'algorithm', 'parameters': '' } self.send(CommandType.NoMoreTrialJobs, nni.dump(ret))
Support multiple HPO experiments in one process (#4855)
_get_one_trial_job
98c1a77f61900d486f46d284c49fb65675dbee6a
nni
bohb_advisor.py
11
18
https://github.com/microsoft/nni.git
2
95
0
26
164
Python
{ "docstring": "get one trial job, i.e., one hyperparameter configuration.\n\n If this function is called, Command will be sent by BOHB:\n a. If there is a parameter need to run, will return \"NewTrialJob\" with a dict:\n {\n 'parameter_id': id of new hyperparameter\n 'param...
def _get_one_trial_job(self): if not self.generated_hyper_configs: ret = { 'parameter_id': '-1_0_0', 'parameter_source': 'algorithm', 'parameters': '' } self.send(CommandType.NoMoreTrialJobs, nni.dump(ret)) ...
27,125
122,221
77
jax/experimental/pjit.py
39
28
def global_array_to_host_local_array(global_inputs, global_mesh, pspecs): def _convert(arr, pspec):
Add `host_local_array_to_global_array` and `global_array_to_host_local_array` for enabling transition to jax.Array. Also support `FROM_GDA` for `jax.Array` as a backwards compatible change so that users can continue to use that until they transition to jax.Array. Its currently required because of usage like `in_axis_r...
global_array_to_host_local_array
4da72cf3988b4918f65b1401e46c40b7c4504963
jax
pjit.py
12
7
https://github.com/google/jax.git
1
54
0
34
150
Python
{ "docstring": "Converts a global `jax.Array` to a host local `jax.Array`.\n\n You can use this function to transition to `jax.Array`. Using `jax.Array` with\n `pjit` has the same semantics of using GDA with pjit i.e. all `jax.Array`\n inputs to pjit should be globally shaped and the output from `pjit` will also\n...
def global_array_to_host_local_array(global_inputs, global_mesh, pspecs): def _convert(arr, pspec): local_aval = global_mesh._global_to_local( pxla._get_array_mapping(pspec), arr.aval) return array.ArrayImpl( local_aval, MeshPspecSharding(global_mesh.local_mesh, pspec), arr._arrays,...
53,642
213,099
29
samtranslator/utils/py27hash_fix.py
8
8
def __setitem__(self, key, value):
fix: Py27hash fix (#2182) * Add third party py27hash code * Add Py27UniStr and unit tests * Add py27hash_fix utils and tests * Add to_py27_compatible_template and tests * Apply py27hash fix to wherever it is needed * Apply py27hash fix, all tests pass except api_with_any_method_in_swagger * apply py2...
__setitem__
a5db070f446b7cfebdaa6ad2e3dcf78f6105a272
serverless-application-model
py27hash_fix.py
9
3
https://github.com/aws/serverless-application-model.git
1
31
0
8
49
Python
{ "docstring": "\n Override of __setitem__ to track keys and simulate Python2.7 dict\n\n Parameters\n ----------\n key: hashable\n value: Any\n ", "language": "en", "n_whitespaces": 59, "n_words": 16, "vocab_size": 16 }
def __setitem__(self, key, value): super(Py27Dict, self).__setitem__(key, value) self.keylist.add(key)
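The override above stores the value normally and also records the key in a side structure (`self.keylist`) used to simulate Python 2.7 key ordering. A minimal sketch of that key-tracking pattern (class and attribute names here are illustrative, not SAM Translator's):

```python
class TrackedDict(dict):
    """dict subclass that records every key ever assigned."""

    def __init__(self):
        super().__init__()
        self.keylist = set()

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        self.keylist.add(key)  # side channel, mirroring Py27Dict.keylist

d = TrackedDict()
d["a"] = 1
d["b"] = 2
print(sorted(d.keylist))  # ['a', 'b']
```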
3,172
20,004
75
pipenv/patched/notpip/_internal/utils/virtualenv.py
43
6
def virtualenv_no_global() -> bool: # PEP 405 compliance needs to be checked firs
check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for p...
virtualenv_no_global
f3166e673fe8d40277b804d35d77dcdb760fc3b3
pipenv
virtualenv.py
9
7
https://github.com/pypa/pipenv.git
3
27
0
35
52
Python
{ "docstring": "Returns a boolean, whether running in venv with no system site-packages.", "language": "en", "n_whitespaces": 10, "n_words": 11, "vocab_size": 11 }
def virtualenv_no_global() -> bool: # PEP 405 compliance needs to be checked first since virtualenv >=20 would # return True for both checks, but is only able to use the PEP 405 config. if _running_under_venv(): return _no_global_under_venv() if _running_under_regular_virtualenv(): ...
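The helper above distinguishes PEP 405 venvs from legacy virtualenvs before deciding whether system site-packages are visible. A rough standalone check for only the first question, "am I in a PEP 405 venv at all" (the no-global part needs pip's `pyvenv.cfg` parsing, which is omitted here):

```python
import sys

def running_under_venv() -> bool:
    """PEP 405 venvs point sys.prefix away from sys.base_prefix."""
    return sys.prefix != sys.base_prefix

print(running_under_venv())
```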
16,409
75,478
95
wagtail/search/backends/database/mysql/mysql.py
22
13
def autocomplete(self): texts = [] for field in self.search_fields: for current_field, value in self.prepare_field(self.obj, field): if isinstance(current_field,
Reformat with black
autocomplete
d10f15e55806c6944827d801cd9c2d53f5da4186
wagtail
mysql.py
14
7
https://github.com/wagtail/wagtail.git
4
56
0
20
91
Python
{ "docstring": "\n Returns all values to index as \"autocomplete\". This is the value of all AutocompleteFields\n ", "language": "en", "n_whitespaces": 29, "n_words": 14, "vocab_size": 13 }
def autocomplete(self): texts = [] for field in self.search_fields: for current_field, value in self.prepare_field(self.obj, field): if isinstance(current_field, AutocompleteField): texts.append((value)) return " ".join(texts)
80,856
271,833
25
keras/engine/training_utils.py
9
5
def list_to_tuple(maybe_list): if isinstance(maybe_list, list): return tuple
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
list_to_tuple
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
training_utils.py
9
4
https://github.com/keras-team/keras.git
2
21
0
8
36
Python
{ "docstring": "Datasets will stack the list of tensor, so switch them to tuples.", "language": "en", "n_whitespaces": 11, "n_words": 12, "vocab_size": 12 }
def list_to_tuple(maybe_list): if isinstance(maybe_list, list): return tuple(maybe_list) return maybe_list
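This record's `code` field is complete, so the behaviour can be exercised directly (the function body is copied from the record; only comments and the demo calls are added):

```python
def list_to_tuple(maybe_list):
    """Datasets stack lists of tensors, so convert top-level lists to tuples."""
    if isinstance(maybe_list, list):
        return tuple(maybe_list)
    return maybe_list  # non-lists pass through unchanged

print(list_to_tuple([1, 2, 3]))    # (1, 2, 3)
print(list_to_tuple("leave me"))   # 'leave me'
print(list_to_tuple([[1], [2]]))   # ([1], [2]) -- conversion is shallow
```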
8,100
43,926
22
tests/models/test_taskinstance.py
8
6
def test_not_recorded_for_unused(self, dag_maker, xcom_value):
Add TaskMap and TaskInstance.map_id (#20286) Co-authored-by: Ash Berlin-Taylor <ash_github@firemirror.com>
test_not_recorded_for_unused
d48a3a357fd89ec805d086d5b6c1f1d4daf77b9a
airflow
test_taskinstance.py
12
8
https://github.com/apache/airflow.git
1
63
0
8
38
Python
{ "docstring": "A value not used for task-mapping should not be recorded.", "language": "en", "n_whitespaces": 9, "n_words": 10, "vocab_size": 9 }
def test_not_recorded_for_unused(self, dag_maker, xcom_value): with dag_maker(dag_id="test_not_recorded_for_unused") as dag:
@keras_export( "keras.metrics.mean_absolute_error", "keras.metrics.mae", "keras.metrics.MAE", "keras.losses.mean_absolute_error", "keras.losses.mae", "keras.losses.MAE", ) @tf.__internal__.dispatch.add_dispatch_support
81,233
274,555
37
keras/losses.py
16
10
def _ragged_tensor_mse(y_true, y_pred): return _ragged_tensor_apply_loss(mean_squared_error, y_true, y_pred) @keras_export( "keras.metrics.mean_abso
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
_ragged_tensor_mse
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
losses.py
7
2
https://github.com/keras-team/keras.git
1
17
1
16
71
Python
{ "docstring": "Implements support for handling RaggedTensors.\n\n Args:\n y_true: RaggedTensor truth values. shape = `[batch_size, d0, .. dN]`.\n y_pred: RaggedTensor predicted values. shape = `[batch_size, d0, .. dN]`.\n\n Returns:\n Mean squared error values. shape = `[batch_size, d0, .. dN-1]...
def _ragged_tensor_mse(y_true, y_pred): return _ragged_tensor_apply_loss(mean_squared_error, y_true, y_pred) @keras_export( "keras.metrics.mean_absolute_error", "keras.metrics.mae", "keras.metrics.MAE", "keras.losses.mean_absolute_error", "keras.losses.mae", "keras.losses.MAE", ) @tf....
46,496
191,358
32
tests/unit_tests/test_formatting.py
16
9
def test_does_not_allow_extra_kwargs() -> None: template = "This is a {foo} test." with pytest.raises(KeyError): formatter.for
initial commit
test_does_not_allow_extra_kwargs
18aeb720126a68201c7e3b5a617139c27c779496
langchain
test_formatting.py
11
5
https://github.com/hwchase17/langchain.git
1
32
0
16
61
Python
{ "docstring": "Test formatting does not allow extra key word arguments.", "language": "en", "n_whitespaces": 8, "n_words": 9, "vocab_size": 9 }
def test_does_not_allow_extra_kwargs() -> None: template = "This is a {foo} test." with pytest.raises(KeyError): formatter.format(template, foo="good", bar="oops")
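The test above expects `formatter.format` to raise `KeyError` when a keyword is supplied that the template never uses, which is stricter than `str.format` (it silently ignores extras). One way to get that behaviour with the stdlib (a sketch; LangChain's actual formatter may differ):

```python
import string

class StrictFormatter(string.Formatter):
    """Formatter that rejects keyword arguments the template does not use."""

    def check_unused_args(self, used_args, args, kwargs):
        # vformat() passes in the set of field names it actually consumed.
        extra = set(kwargs) - set(used_args)
        if extra:
            raise KeyError(f"unused format arguments: {sorted(extra)}")

formatter = StrictFormatter()
print(formatter.format("This is a {foo} test.", foo="good"))

try:
    formatter.format("This is a {foo} test.", foo="good", bar="oops")
except KeyError as exc:
    print("rejected:", exc)
```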
75,342
258,632
107
sklearn/neighbors/_lof.py
33
19
def score_samples(self, X): check_is_fitted(self) X = check_array(X, accept_sparse="csr") distances_X, neighbors_indices_X = self.kneighbors( X, n_neighbors=self.n_neighbors_ ) X_lrd = self._local_reachability_density(distances_X, neighbors_indices_X) ...
DOC improve LOF documentation wrt difference of predict and fit_predict (#21878) * improve LOF documentation * Update sklearn/neighbors/_lof.py Co-authored-by: Alexandre Gramfort <alexandre.gramfort@m4x.org> Co-authored-by: Alexandre Gramfort <alexandre.gramfort@m4x.org>
score_samples
0dfaaadfe2d0e0b4fd9d2ba22a75b7b1b1903049
scikit-learn
_lof.py
10
9
https://github.com/scikit-learn/scikit-learn.git
1
77
0
30
122
Python
{ "docstring": "Opposite of the Local Outlier Factor of X.\n\n It is the opposite as bigger is better, i.e. large values correspond\n to inliers.\n\n **Only available for novelty detection (when novelty is set to True).**\n The argument X is supposed to contain *new data*: if X contains a\...
def score_samples(self, X): check_is_fitted(self) X = check_array(X, accept_sparse="csr") distances_X, neighbors_indices_X = self.kneighbors( X, n_neighbors=self.n_neighbors_ ) X_lrd = self._local_reachability_density(distances_X, neighbors_indices_X) ...
14,380
66,920
44
erpnext/payroll/doctype/payroll_period/payroll_period.py
63
18
def get_payroll_period_days(start_date, end_date, employee, company=None): if not company: company = frappe.db.get_value("Employee", employee, "company") payroll_period = frappe.db.sql( , {"company": company, "st
style: format code with black
get_payroll_period_days
494bd9ef78313436f0424b918f200dab8fc7c20b
erpnext
payroll_period.py
16
26
https://github.com/frappe/erpnext.git
4
165
0
48
256
Python
{ "docstring": "\n\t\tselect name, start_date, end_date\n\t\tfrom `tabPayroll Period`\n\t\twhere\n\t\t\tcompany=%(company)s\n\t\t\tand %(start_date)s between start_date and end_date\n\t\t\tand %(end_date)s between start_date and end_date\n\t", "language": "en", "n_whitespaces": 15, "n_words": 21, "vocab_size"...
def get_payroll_period_days(start_date, end_date, employee, company=None): if not company: company = frappe.db.get_value("Employee", employee, "company") payroll_period = frappe.db.sql( , {"company": company, "start_date": start_date, "end_date": end_date}, ) if len(payroll_period) > 0: actual_no_of_days =...
42,041
176,699
120
networkx/algorithms/bipartite/basic.py
52
12
def density(B, nodes): n = len(B) m = nx.number_of_edges(B) nb = len(nodes) nt = n - nb if m == 0: # includes cases n==0 and n==1 d = 0.0 else: if B.is_directed():
Remove redundant py2 numeric conversions (#5661) * Remove redundant float conversion * Remove redundant int conversion * Use integer division Co-authored-by: Miroslav Šedivý <6774676+eumiro@users.noreply.github.com>
density
2a05ccdb07cff88e56661dee8a9271859354027f
networkx
basic.py
15
13
https://github.com/networkx/networkx.git
3
76
0
31
124
Python
{ "docstring": "Returns density of bipartite graph B.\n\n Parameters\n ----------\n B : NetworkX graph\n\n nodes: list or container\n Nodes in one node set of the bipartite graph.\n\n Returns\n -------\n d : float\n The bipartite density\n\n Examples\n --------\n >>> from netw...
def density(B, nodes): n = len(B) m = nx.number_of_edges(B) nb = len(nodes) nt = n - nb if m == 0: # includes cases n==0 and n==1 d = 0.0 else: if B.is_directed(): d = m / (2 * nb * nt) else: d = m / (nb * nt) return d
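The function above reduces to counting: with `nb` nodes in one set, `nt = n - nb` in the other, and `m` edges, the undirected bipartite density is `m / (nb * nt)` (halved relative to edge count for directed graphs, per the record's code). A dependency-free sketch of that arithmetic (helper name is mine; it takes counts instead of a NetworkX graph):

```python
def bipartite_density(n, nb, m, directed=False):
    """Bipartite density from counts: nb nodes in one set, n - nb in the other."""
    nt = n - nb
    if m == 0:  # covers n == 0 and n == 1 as well
        return 0.0
    if directed:
        return m / (2 * nb * nt)
    return m / (nb * nt)

print(bipartite_density(6, 3, 9))  # complete K_{3,3}: 1.0
print(bipartite_density(5, 2, 3))  # 3 of 6 possible edges: 0.5
```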
8,956
46,701
118
airflow/www/views.py
27
9
def redirect_or_json(origin, msg, status=""): if request.headers.get('Accept') == 'application/json': return {'status': status, 'message': msg} else: if status: flash(msg, status) else: flash(msg) return redirect(origin) ############################...
Add details drawer to Grid View (#22123) * make UI and tree work with mapped tasks basic slide drawer reformat grid background colors improve rendering and add selected dag run fix hover and extra prop switch from drawer to details section add tooltip info to details use API make side panel col...
redirect_or_json
2bb26a38070a4b949bfb210ef1d5644e016e373a
airflow
views.py
13
9
https://github.com/apache/airflow.git
3
56
0
23
103
Python
{ "docstring": "\n Some endpoints are called by javascript,\n returning json will allow us to more elegantly handle side-effects in-page\n ", "language": "en", "n_whitespaces": 27, "n_words": 17, "vocab_size": 17 }
def redirect_or_json(origin, msg, status=""): if request.headers.get('Accept') == 'application/json': return {'status': status, 'message': msg} else: if status: flash(msg, status) else: flash(msg) return redirect(origin) ############################...
75,649
259,212
203
sklearn/preprocessing/_encoders.py
62
12
def _map_drop_idx_to_infrequent(self, feature_idx, drop_idx): if not self._infrequent_enabled: return drop_idx default_to_infrequent = self._default_to_infrequent_mappings[feature_idx] if default_to_infrequent is None: return drop_idx # Raise error when...
ENH Adds infrequent categories to OneHotEncoder (#16018) * ENH Completely adds infrequent categories * STY Linting * STY Linting * DOC Improves wording * DOC Lint * BUG Fixes * CLN Address comments * CLN Address comments * DOC Uses math to description float min_frequency * DOC Adds comment r...
_map_drop_idx_to_infrequent
7f0006c8aad1a09621ad19c3db19c3ff0555a183
scikit-learn
_encoders.py
13
14
https://github.com/scikit-learn/scikit-learn.git
5
72
0
47
127
Python
{ "docstring": "Convert `drop_idx` into the index for infrequent categories.\n\n If there are no infrequent categories, then `drop_idx` is\n returned. This method is called in `_compute_drop_idx` when the `drop`\n parameter is an array-like.\n ", "language": "en", "n_whitespaces": 59, ...
def _map_drop_idx_to_infrequent(self, feature_idx, drop_idx): if not self._infrequent_enabled: return drop_idx default_to_infrequent = self._default_to_infrequent_mappings[feature_idx] if default_to_infrequent is None: return drop_idx # Raise error when...
29,076
130,020
57
dashboard/tests/test_dashboard.py
29
13
def test_dashboard_module_decorator(enable_test_module): head_cls_list = dashboard_utils.get_all_modules(dashboard_utils.DashboardHeadModule) agent_cls_list = dashboard_utils.get_all_modules( dashboard_utils.DashboardAgentModule ) assert any(cls.__name__ == "TestHead" for cls in head_cls_list) ...
[CI] Format Python code with Black (#21975) See #21316 and #21311 for the motivation behind these changes.
test_dashboard_module_decorator
7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065
ray
test_dashboard.py
9
23
https://github.com/ray-project/ray.git
3
58
0
21
97
Python
{ "docstring": "\nimport os\nimport ray.dashboard.utils as dashboard_utils\n\nos.environ.pop(\"RAY_DASHBOARD_MODULE_TEST\")\nhead_cls_list = dashboard_utils.get_all_modules(\n dashboard_utils.DashboardHeadModule)\nagent_cls_list = dashboard_utils.get_all_modules(\n dashboard_utils.DashboardAgentModule)\...
def test_dashboard_module_decorator(enable_test_module): head_cls_list = dashboard_utils.get_all_modules(dashboard_utils.DashboardHeadModule) agent_cls_list = dashboard_utils.get_all_modules( dashboard_utils.DashboardAgentModule ) assert any(cls.__name__ == "TestHead" for cls in head_cls_list) ...
57,102
223,844
21
python3.10.4/Lib/email/parser.py
7
6
def parsestr(self, text, headersonly=False): return self.parse(StringIO(text), headersonly=headers
add python 3.10.4 for windows
parsestr
8198943edd73a363c266633e1aa5b2a9e9c9f526
XX-Net
parser.py
9
2
https://github.com/XX-net/XX-Net.git
1
26
0
7
41
Python
{ "docstring": "Create a message structure from a string.\n\n Returns the root of the message structure. Optional headersonly is a\n flag specifying whether to stop parsing after reading the headers or\n not. The default is False, meaning it parses the entire contents of\n the file.\n ...
def parsestr(self, text, headersonly=False): return self.parse(StringIO(text), headersonly=headersonly)
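`parsestr` simply wraps the string in a `StringIO` and hands it to the file-based `parse`. The public stdlib API can be exercised directly:

```python
from email.parser import Parser

raw = "Subject: hello\nFrom: a@example.com\n\nbody text\n"

msg = Parser().parsestr(raw)
print(msg["Subject"])      # hello
print(msg.get_payload())   # body text (with trailing newline)

# headersonly=True stops parsing after the headers; the remainder
# is stored as the payload without further MIME parsing.
head = Parser().parsestr(raw, headersonly=True)
print(head["From"])        # a@example.com
```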
78,323
266,162
69
netbox/netbox/views/generic/utils.py
19
9
def get_prerequisite_model(queryset): if not queryset.exists(): for prereq in getattr(queryset.model, 'prerequisite_models', []): model = apps.get_model(prereq) if not model.objects.exists(): return model
Use strings to specify prerequisite models
get_prerequisite_model
ebf555e1fb1267348ca620c15ce456767d91042a
netbox
utils.py
13
6
https://github.com/netbox-community/netbox.git
4
49
0
16
83
Python
{ "docstring": "\n Return any prerequisite model that must be created prior to creating\n an instance of the current model.\n ", "language": "en", "n_whitespaces": 27, "n_words": 17, "vocab_size": 17 }
def get_prerequisite_model(queryset): if not queryset.exists(): for prereq in getattr(queryset.model, 'prerequisite_models', []): model = apps.get_model(prereq) if not model.objects.exists(): return model
81,435
275,623
303
keras/optimizers/optimizer_v2/utils.py
95
15
def filter_empty_gradients(grads_and_vars): grads_and_vars = tuple(grads_and_vars) if not grads_and_vars: return grads_and_vars filtered = [] vars_with_empty_grads = [] for grad, var in grads_and_vars: if grad is None: vars_with_empty_grads.append(var) else:...
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
filter_empty_gradients
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
utils.py
13
28
https://github.com/keras-team/keras.git
8
118
0
69
203
Python
{ "docstring": "Filter out `(grad, var)` pairs that have a gradient equal to `None`.", "language": "en", "n_whitespaces": 11, "n_words": 12, "vocab_size": 12 }
def filter_empty_gradients(grads_and_vars): grads_and_vars = tuple(grads_and_vars) if not grads_and_vars: return grads_and_vars filtered = [] vars_with_empty_grads = [] for grad, var in grads_and_vars: if grad is None: vars_with_empty_grads.append(var) else:...
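The filtering logic above is framework-independent: drop `(grad, var)` pairs whose gradient is `None`, remembering which variables were skipped. A plain-Python sketch of the core loop (unlike the Keras original, it returns the skipped variables instead of logging a warning):

```python
def filter_empty_gradients(grads_and_vars):
    """Drop (grad, var) pairs whose grad is None; report skipped vars."""
    grads_and_vars = tuple(grads_and_vars)
    if not grads_and_vars:
        return grads_and_vars, []
    filtered, vars_with_empty_grads = [], []
    for grad, var in grads_and_vars:
        if grad is None:
            vars_with_empty_grads.append(var)  # no gradient flowed to this var
        else:
            filtered.append((grad, var))
    return tuple(filtered), vars_with_empty_grads

pairs = [(0.5, "w1"), (None, "w2"), (1.5, "w3")]
kept, skipped = filter_empty_gradients(pairs)
print(kept)     # ((0.5, 'w1'), (1.5, 'w3'))
print(skipped)  # ['w2']
```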
49,486
200,000
439
sympy/physics/wigner.py
231
37
def real_gaunt(l_1, l_2, l_3, m_1, m_2, m_3, prec=None): r l_1, l_2, l_3, m_1, m_2, m_3 = [ as_int(i) for i in (l_1, l_2, l_3, m_1, m_2, m_3)] # check for quick exits if sum(1 for i in (m_1, m_2, m_3) if i < 0) % 2: return S.Zero # odd number of negative m if (l_1 + l_2 + l_3) % 2:...
Update wigner.py
real_gaunt
f8aedc2fa7434091fc83ff241298534f79047c60
sympy
wigner.py
16
142
https://github.com/sympy/sympy.git
15
424
0
124
623
Python
{ "docstring": "\n Calculate the real Gaunt coefficient.\n\n Explanation\n ===========\n The real Gaunt coefficient is defined as the integral over three\n real spherical harmonics:\n \n .. math::\n \\begin{aligned}\n \\operatorname{RealGaunt}(l_1,l_2,l_3,m_1,m_2,m_3)\n &=\\i...
def real_gaunt(l_1, l_2, l_3, m_1, m_2, m_3, prec=None): r l_1, l_2, l_3, m_1, m_2, m_3 = [ as_int(i) for i in (l_1, l_2, l_3, m_1, m_2, m_3)] # check for quick exits if sum(1 for i in (m_1, m_2, m_3) if i < 0) % 2: return S.Zero # odd number of negative m if (l_1 + l_2 + l_3) % 2:...