PHPFixing
Showing posts with label data-structures. Show all posts

Friday, December 16, 2022

[FIXED] What does C++ struct syntax "a : b" mean

 December 16, 2022     c++, data-structures, struct, syntax     No comments   

Issue

If I have a C++ struct defining a 64-bit data word, such as:

struct SMyDataWord
{
    int Name : 40;
    int Colour : 24;
};

What does the : 40 syntax mean? Does it mean that the first 40 bits are reserved for Name and the remaining 24 bits for Colour?

This is how it appears to be used, but I've not come across it before.


Solution

These are bit-fields, carried over from C. Name is 40 bits wide and Colour is 24 bits wide, so your struct occupies at least 64 bits. On most systems, 64 bits is 8 bytes.



Answered By - dirkgently
Answer Checked By - Mildred Charles (PHPFixing Admin)

Tuesday, November 22, 2022

[FIXED] How does one create custom filter conditions for array items upon ever newly computed query-data?

 November 22, 2022     arrays, data-structures, filtering, javascript, multiple-conditions     No comments   

Issue

I have a filter object that is returned by query params

url = /all?channels=calls,texts&calls=voicemail,missed

const query = {
  channels: 'calls,texts',
  calls: 'voicemail,missed',
};

I then have an array of objects that come in from a socket.

const arr = [
  {
    id: 1,
    channel: 'SMS',
    sent: '2021-08-22T03:21:18.41650+0000',
    sender: {
      contactType: 'business',
    },
    recipients: [
      {
        contactType: 'corporate',
      },
    ],
    direction: 'INBOUND',
  },
  {
    id: 2,
    channel: 'VOICE',
    sent: '2021-08-20T23:15:56.00000+0000',
    sender: {
      contactType: 'business',
    },
    recipients: [
      {
        contactType: 'corporate',
      },
    ],
    callDetails: {
      answered: false,
      voicemail: true,
    },
    direction: 'INBOUND',
  },
  {
    id: 3,
    channel: 'VOICE',
    sent: '2021-08-20T23:15:56.00000+0000',
    sender: {
      contactType: 'business',
    },
    recipients: [
      {
        contactType: 'corporate',
      },
    ],
    callDetails: {
      answered: true,
      voicemail: false,
    },
    direction: 'INBOUND',
  },
  {
    id: 4,
    channel: 'VOICE',
    sent: '2021-08-20T23:15:56.00000+0000',
    sender: {
      contactType: 'business',
    },
    recipients: [
      {
        contactType: 'corporate',
      },
    ],
    callDetails: {
      answered: false,
      voicemail: false,
    },
    direction: 'INBOUND',
  },
];

I want to keep the objects that match the filters, but the query object isn't friendly enough to just map arr through.

With the query object shared above, I should get back the objects with id: 1, id: 2, and id: 4 from arr, since those objects meet the criteria of SMS, voicemail, and missed.

I assume I need a modified query object that provides various conditions for each property, e.g. calls: voicemail === callDetails.voicemail === true or calls: received === callDetails.answered === true.

I've seen lots of examples of how to filter an array of objects with multiple match criteria, but with the requirement that a property can have multiple conditions, I've hit a wall.

Thanks for the help.


Solution

The main idea is to provide a kind of Rosetta stone that bridges/maps the query-specific syntax to any list item's specific data structure. One ends up writing a map that takes the query's structure into account and provides, for each necessary query endpoint, an item-specific filter condition/function.

The query function then simply filters the item list by applying a list of logical OR conditions, using some to compute the boolean filter value.

That leaves implementing a helper method which collects ... via Object.entries and Array.prototype.flatMap as well as String.prototype.split and Array.prototype.map ... the function endpoints from the requirements configuration/map introduced above, based on the query object provided by the system. This helper might be named resolveQuery.

const sampleList = [{
  id: 1,
  channel: 'SMS',

  direction: 'INBOUND',
}, {
  id: 2,
  channel: 'VOICE',

  callDetails: {
    answered: false,
    voicemail: true,
  },
  direction: 'INBOUND',
}, {
  id: 3,
  channel: 'VOICE',

  callDetails: {
    answered: true,
    voicemail: false,
  },
  direction: 'INBOUND',
}, {
  id: 4,
  channel: 'VOICE',

  callDetails: {
    answered: false,
    voicemail: false,
  },
  direction: 'INBOUND',
}];

// prepare a `requirements` map which ...
// - on one hand maps `query`-syntax to a list items's structure
// - and on the other hand does so by providing an item specific
//   filter condition/function for each necessary query endpoint.
const requirements = {
  channels: {
    texts: item => item.channel === 'SMS',
  },
  calls: {
    voicemail: item => item.channel === 'VOICE' && !!item.callDetails.voicemail,
    missed: item => item.channel === 'VOICE' && !item.callDetails.answered,
  },
}
// const query = {
//   channels: 'calls,texts',
//   calls: 'voicemail,missed',
// };

function resolveQuery(requirements, query) {
  const reject = item => false;

  // create/collect a list of filter condition/functions
  // which later will be applied as logical OR via `some`.
  return Object

    .entries(query)
    .flatMap(([ groupKey, groupValue ]) =>
      // e.g groupKey => 'channels',
      // groupValue => 'calls,texts'
      groupValue
        .split(',')
        .map(requirementKey =>
          // e.g requirementKey => 'calls'
          // or requirementKey => 'texts'
          requirements?.[groupKey]?.[requirementKey?.trim()] ?? reject
        )
    );
}

function queryFromItemList(itemList, requirements, query) {
  const conditionList = resolveQuery(requirements, query);

  console.log(
    'conditionList ... [\n ',
    conditionList.join(',\n  '),
    '\n]'
  );

  return itemList.filter(item =>
    conditionList.some(condition => condition(item))
  );
}

console.log(
  queryFromItemList(sampleList, requirements, {
    channels: 'calls,texts',
    calls: 'voicemail,missed',
  })
);



Answered By - Peter Seliger
Answer Checked By - Timothy Miller (PHPFixing Admin)

Monday, November 7, 2022

[FIXED] How to write a python3 function that merges two lists of sets so that the output list has sets of elements that were present in both input sets?

 November 07, 2022     data-structures, merge, python, set, time-complexity     No comments   

Issue

I have two lists of sets, let's say: [{1, 2, 3}, {4, 5}, {6, 7}] and [{1, 2}, {3, 4}, {5, 6, 7}]

No element appears in more than one set within a list, and the union of all sets is the same for both lists. The function should check whether the sets in both lists have the same elements. If there are differences, the differing elements should be put into separate sets.

So above example should return: [{1, 2}, {3}, {4}, {5}, {6, 7}]

I work on large sets so I would need this function to be as effective as possible.

Here is the example code and how I would like it to work:

def mergeSets(x, y):
    out = set()
    for i in x:
        out = out.union(i)
        # this gets me the set of all elements, but this is where my mind stops working
        # the problem sounds simple, but after hours of thinking I cannot come up with a good algorithm for this issue :(
        # I found set.intersection(), but it works on single sets only, not on lists of sets
    return out


x = mergeSets([{1, 2, 3}, {4, 5}, {6, 7}], [{1, 2}, {3, 4}, {5, 6, 7}])
print(x)
# [{1, 2}, {3}, {4}, {5}, {6, 7}]
x = mergeSets([{1, 2}, {3, 4, 5, 6, 7}, {8}], [{1}, {2, 3, 4}, {5, 6, 7, 8}])
print(x)
# [{1}, {2}, {3, 4}, {5, 6, 7}, {8}]

EDIT: the data doesn't have to be sorted and may even be of types other than integer

EDIT2: the input lists don't have to be sorted, so sets may appear in random order


Solution

Given that each value occurs in exactly two sets (one per input list), you could collect the index pair for each value, where the pair indicates in which two sets (at which index in each list) the value occurs.

Unique pairs indicate unique sets in the output, so a dictionary of such pairs can serve to populate the result:

from collections import defaultdict

def merge_sets(lista, listb):
    index_in_a = {
        val: idx
        for idx, elem in enumerate(lista) for val in elem
    }
    set_by_key = defaultdict(set)
    for idx, elem in enumerate(listb):
        for val in elem:
            set_by_key[(index_in_a[val], idx)].add(val)
    return list(set_by_key.values())

This looks O(n) to me.

NB: since the order of iteration over a set is not defined, the order of the output can look a bit mingled, but I assume the order of where sets appear in the output is not significant.



Answered By - trincot
Answer Checked By - Dawn Plyler (PHPFixing Volunteer)

Thursday, November 3, 2022

[FIXED] How to correctly use a map of sets with custom comparator in C++?

 November 03, 2022     c++, c++17, data-structures, lambda     No comments   

Issue

I want to use a map of sets, with the set having a custom comparator. Here is my code:

using item = pair<string, int>;
auto comp = [](item const& p1, item const& p2) {return p1.second < p2.second;};

class TimeMap {
    unordered_map<string, decltype(set<item, decltype(comp)>(comp))> data;
    
public:
    TimeMap() {
        data.clear();
    }
    
    void set(string key, string value, int timestamp) {
        data[key].insert({value, timestamp});
    }
};

This yields the following compile error:

In file included from prog_joined.cpp:1:
In file included from ./precompiled/headers.h:50:
In file included from /usr/bin/../lib/gcc/x86_64-linux-gnu/9/../../../../include/c++/9/map:60:
/usr/bin/../lib/gcc/x86_64-linux-gnu/9/../../../../include/c++/9/bits/stl_tree.h:149:9: error: no matching constructor for initialization of '(lambda at prog_joined.cpp:7:13)'
      : _M_key_compare()
        ^
/usr/bin/../lib/gcc/x86_64-linux-gnu/9/../../../../include/c++/9/bits/stl_tree.h:680:4: note: in instantiation of member function 'std::_Rb_tree_key_compare<(lambda at prog_joined.cpp:7:13)>::_Rb_tree_key_compare' requested here
          _Rb_tree_impl()
          ^
/usr/bin/../lib/gcc/x86_64-linux-gnu/9/../../../../include/c++/9/bits/stl_tree.h:939:7: note: in instantiation of member function 'std::_Rb_tree<std::pair<std::__cxx11::basic_string<char>, int>, std::pair<std::__cxx11::basic_string<char>, int>, std::_Identity<std::pair<std::__cxx11::basic_string<char>, int>>, (lambda at prog_joined.cpp:7:13), std::allocator<std::pair<std::__cxx11::basic_string<char>, int>>>::_Rb_tree_impl<(lambda at prog_joined.cpp:7:13), false>::_Rb_tree_impl' requested here
      _Rb_tree() = default;
      ^
/usr/bin/../lib/gcc/x86_64-linux-gnu/9/../../../../include/c++/9/tuple:1661:9: note: in instantiation of function template specialization 'std::pair<const std::__cxx11::basic_string<char>, std::set<std::pair<std::__cxx11::basic_string<char>, int>, (lambda at prog_joined.cpp:7:13), std::allocator<std::pair<std::__cxx11::basic_string<char>, int>>>>::pair<const std::__cxx11::basic_string<char> &, 0>' requested here
      : pair(__first, __second,
        ^
/usr/bin/../lib/gcc/x86_64-linux-gnu/9/../../../../include/c++/9/ext/new_allocator.h:147:23: note: in instantiation of function template specialization 'std::pair<const std::__cxx11::basic_string<char>, std::set<std::pair<std::__cxx11::basic_string<char>, int>, (lambda at prog_joined.cpp:7:13), std::allocator<std::pair<std::__cxx11::basic_string<char>, int>>>>::pair<const std::__cxx11::basic_string<char> &>' requested here
        { ::new((void *)__p) _Up(std::forward<_Args>(__args)...); }
                             ^
/usr/bin/../lib/gcc/x86_64-linux-gnu/9/../../../../include/c++/9/bits/alloc_traits.h:484:8: note: in instantiation of function template specialization '__gnu_cxx::new_allocator<std::__detail::_Hash_node<std::pair<const std::__cxx11::basic_string<char>, std::set<std::pair<std::__cxx11::basic_string<char>, int>, (lambda at prog_joined.cpp:7:13), std::allocator<std::pair<std::__cxx11::basic_string<char>, int>>>>, true>>::construct<std::pair<const std::__cxx11::basic_string<char>, std::set<std::pair<std::__cxx11::basic_string<char>, int>, (lambda at prog_joined.cpp:7:13), std::allocator<std::pair<std::__cxx11::basic_string<char>, int>>>>, const std::piecewise_construct_t &, std::tuple<const std::__cxx11::basic_string<char> &>, std::tuple<>>' requested here
        { __a.construct(__p, std::forward<_Args>(__args)...); }
              ^
/usr/bin/../lib/gcc/x86_64-linux-gnu/9/../../../../include/c++/9/bits/hashtable_policy.h:2086:27: note: in instantiation of function template specialization 'std::allocator_traits<std::allocator<std::__detail::_Hash_node<std::pair<const std::__cxx11::basic_string<char>, std::set<std::pair<std::__cxx11::basic_string<char>, int>, (lambda at prog_joined.cpp:7:13), std::allocator<std::pair<std::__cxx11::basic_string<char>, int>>>>, true>>>::construct<std::pair<const std::__cxx11::basic_string<char>, std::set<std::pair<std::__cxx11::basic_string<char>, int>, (lambda at prog_joined.cpp:7:13), std::allocator<std::pair<std::__cxx11::basic_string<char>, int>>>>, const std::piecewise_construct_t &, std::tuple<const std::__cxx11::basic_string<char> &>, std::tuple<>>' requested here
            __node_alloc_traits::construct(_M_node_allocator(),
                                 ^
/usr/bin/../lib/gcc/x86_64-linux-gnu/9/../../../../include/c++/9/bits/hashtable_policy.h:701:15: note: in instantiation of function template specialization 'std::__detail::_Hashtable_alloc<std::allocator<std::__detail::_Hash_node<std::pair<const std::__cxx11::basic_string<char>, std::set<std::pair<std::__cxx11::basic_string<char>, int>, (lambda at prog_joined.cpp:7:13), std::allocator<std::pair<std::__cxx11::basic_string<char>, int>>>>, true>>>::_M_allocate_node<const std::piecewise_construct_t &, std::tuple<const std::__cxx11::basic_string<char> &>, std::tuple<>>' requested here
          __p = __h->_M_allocate_node(std::piecewise_construct,
                     ^
/usr/bin/../lib/gcc/x86_64-linux-gnu/9/../../../../include/c++/9/bits/unordered_map.h:985:16: note: in instantiation of member function 'std::__detail::_Map_base<std::__cxx11::basic_string<char>, std::pair<const std::__cxx11::basic_string<char>, std::set<std::pair<std::__cxx11::basic_string<char>, int>, (lambda at prog_joined.cpp:7:13), std::allocator<std::pair<std::__cxx11::basic_string<char>, int>>>>, std::allocator<std::pair<const std::__cxx11::basic_string<char>, std::set<std::pair<std::__cxx11::basic_string<char>, int>, (lambda at prog_joined.cpp:7:13), std::allocator<std::pair<std::__cxx11::basic_string<char>, int>>>>>, std::__detail::_Select1st, std::equal_to<std::__cxx11::basic_string<char>>, std::hash<std::string>, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<true, false, true>, true>::operator[]' requested here
      { return _M_h[__k]; }
               ^
Line 13: Char 13: note: in instantiation of member function 'std::unordered_map<std::__cxx11::basic_string<char>, std::set<std::pair<std::__cxx11::basic_string<char>, int>, (lambda at solution.cpp:7:13), std::allocator<std::pair<std::__cxx11::basic_string<char>, int>>>, std::hash<std::string>, std::equal_to<std::__cxx11::basic_string<char>>, std::allocator<std::pair<const std::__cxx11::basic_string<char>, std::set<std::pair<std::__cxx11::basic_string<char>, int>, (lambda at solution.cpp:7:13), std::allocator<std::pair<std::__cxx11::basic_string<char>, int>>>>>>::operator[]' requested here
        data[key].insert({value, timestamp});
            ^
Line 2: Char 13: note: candidate constructor (the implicit copy constructor) not viable: requires 1 argument, but 0 were provided
auto comp = [](item const& p1, item const& p2) {return p1.second < p2.second;};
            ^
Line 2: Char 13: note: candidate constructor (the implicit move constructor) not viable: requires 1 argument, but 0 were provided
1 error generated.

In this line : data[key].insert({value, timestamp});, a new set (with my custom comparator) is tentatively instantiated as a value of the map, but that fails. What causes this failure and how to fix it?


Solution

The mapped type of the unordered_map is a set, and operator[] default-constructs it, so the set's comparator must be default-constructible. Lambdas are only default-constructible since C++20, so either compile your code with at least that standard, or use a functor object like:

struct Comp {
    bool operator()(const item& lhs, const item& rhs) const {
        return lhs.second < rhs.second;
    }
};

class TimeMap 
{
    unordered_map<string, set<item, Comp> > data;

instead of the lambda.



Answered By - rafix07
Answer Checked By - Mildred Charles (PHPFixing Admin)

Tuesday, October 25, 2022

[FIXED] What is data structure used for Elasticsearch flattened type

 October 25, 2022     data-structures, elasticsearch     No comments   

Issue

I was trying to find how flattened type in Elasticsearch works under the hood, the documentation specifies that all leaf values will be indexed into a single field as a keyword, as a result, there will be a dedicated index for all those flattened keywords.

From documentation:

By default, Elasticsearch indexes all data in every field and each indexed field has a dedicated, optimized data structure. For example, text fields are stored in inverted indices, and numeric and geo fields are stored in BKD trees.

The specific case that I am trying to understand:

If I have a flattened field and index an object with nested objects, it is possible to query a specific nested key in the flattened object. See how labels.release is queried:

PUT bug_reports
{
  "mappings": {
    "properties": {
      "labels": {
        "type": "flattened"
      }
    }
  }
}

POST bug_reports/_doc/1
{
  "labels": {
    "priority": "urgent",
    "release": ["v1.2.5", "v1.3.0"]
  }
}
    
POST bug_reports/_search
{
  "query": {
    "term": {"labels.release": "v1.3.0"}
  }
}

Would a flattened field have the same index structure as a keyword field, and how is it able to reference a specific child key of the flattened object?


Solution

The initial design and implementation of the flattened field type is described in this issue. The leaf keys are indexed along with the leaf values, which is what makes searching for a specific sub-field possible.

There are some ongoing improvements to the flattened field type and Elastic would also like to support numeric values, but that's not yet released.



Answered By - Val
Answer Checked By - Gilberto Lyons (PHPFixing Admin)

Monday, October 17, 2022

[FIXED] How many integers can I create in 1GB memory?

 October 17, 2022     data-structures, integer, language-agnostic, memory-consumption     No comments   

Issue

In book Algorithms fourth edition by Robert Sedgewick on page 200, it says "for example, if you have 1GB of memory on your computer (1 billion bytes), you cannot fit more than about 32 million int values."

I got confused after my calculation: 1,000,000,000 bytes/4 bytes = 250 million

How did the author get 32 million?



Solution

The author has acknowledged that this is an error on the book's errata page: http://algs4.cs.princeton.edu/errata/errata-printing3.php



Answered By - Max
Answer Checked By - Mildred Charles (PHPFixing Admin)

Tuesday, September 20, 2022

[FIXED] How does std::unordered_map differentiate between values in the same bucket?

 September 20, 2022     c++, data-structures, hashmap, unordered-map     No comments   

Issue

I know that std::unordered_map handles key hash collisions by chaining keys with the same hash in a bucket (I think using a linked list). The question I have is how does it know which value, in the same bucket, corresponds to which key?

The naive idea I have is that each value is stored in a std::pair, is this how it is done?


Solution

Yep, that's basically how it's done. Keep in mind that the key is part of the data. One of the things you can do with a map is iterate through its key/value pairs. This would be impossible, even with perfect hashing, if the key itself were not stored.



Answered By - Sneftel
Answer Checked By - Katrina (PHPFixing Volunteer)

[FIXED] How to convert List to Map?

 September 20, 2022     data-conversion, data-structures, hashmap, java, list     No comments   

Issue

Recently I had a conversation with a colleague about the optimal way to convert a List to a Map in Java and whether there are any specific benefits of doing so.

I want to know the optimal conversion approach and would really appreciate it if anyone could guide me.

Is this good approach:

List<Object[]> results;
Map<Integer, String> resultsMap = new HashMap<Integer, String>();
for (Object[] o : results) {
    resultsMap.put((Integer) o[0], (String) o[1]);
}

Solution

List<Item> list;
Map<Key,Item> map = new HashMap<Key,Item>();
for (Item i : list) map.put(i.getKey(),i);

Assuming of course that each Item has a getKey() method that returns a key of the proper type.



Answered By - Jim Garrison
Answer Checked By - Terry (PHPFixing Volunteer)

[FIXED] Why can't HashMap have different NaN values as keys?

 September 20, 2022     data-structures, hashmap, java     No comments   

Issue

import java.util.*;

public class MyClass {
    public static void main(String args[]) {
      Map<Float, Integer> m = new HashMap<>();
      
      m.put(Float.intBitsToFloat(0x7f800001), 1);
      m.put(Float.intBitsToFloat(0x7f800002), 2);
      
      System.out.println(m.size());
    }
}

Why does the above code return 1 as the size of m? 0x7f800001 and 0x7f800002 are both NaN float values, but since NaN != NaN by definition, this should not cause a collision.

The behavior is similar to the documented handling of null keys in the Java HashMap, but I can't find anything that indicates that NaN is handled like null by HashMap.


Solution

Float.equals(Object) is documented as:

Compares this object against the specified object. The result is true if and only if the argument is not null and is a Float object that represents a float with the same value as the float represented by this object. For this purpose, two float values are considered to be the same if and only if the method floatToIntBits(float) returns the identical int value when applied to each.

Now that sounds like different NaN values should be treated as non-equal, but floatToIntBits documentation includes (emphasis mine):

If the argument is NaN, the result is 0x7fc00000.

In all cases, the result is an integer that, when given to the intBitsToFloat(int) method, will produce a floating-point value the same as the argument to floatToIntBits (except all NaN values are collapsed to a single "canonical" NaN value).

So basically, Float.equals (and Float.hashCode, which also uses floatToIntBits(float)) treats all NaN values as equal.



Answered By - Jon Skeet
Answer Checked By - Katrina (PHPFixing Volunteer)

Thursday, September 15, 2022

[FIXED] How to print binary tree diagram in Java?

 September 15, 2022     binary-tree, data-structures, java, printing     No comments   

Issue

How can I print a binary tree in Java so that the output is like:

   4 
  / \ 
 2   5 

My node:

public class Node<A extends Comparable> {
    Node<A> left, right;
    A data;

    public Node(A data){
        this.data = data;
    }
}

Solution

I've created a simple binary tree printer. You can use and modify it as you want, but it's not optimized in any way. I think a lot of things could be improved here ;)

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class BTreePrinterTest {

    private static Node<Integer> test1() {
        Node<Integer> root = new Node<Integer>(2);
        Node<Integer> n11 = new Node<Integer>(7);
        Node<Integer> n12 = new Node<Integer>(5);
        Node<Integer> n21 = new Node<Integer>(2);
        Node<Integer> n22 = new Node<Integer>(6);
        Node<Integer> n23 = new Node<Integer>(3);
        Node<Integer> n24 = new Node<Integer>(6);
        Node<Integer> n31 = new Node<Integer>(5);
        Node<Integer> n32 = new Node<Integer>(8);
        Node<Integer> n33 = new Node<Integer>(4);
        Node<Integer> n34 = new Node<Integer>(5);
        Node<Integer> n35 = new Node<Integer>(8);
        Node<Integer> n36 = new Node<Integer>(4);
        Node<Integer> n37 = new Node<Integer>(5);
        Node<Integer> n38 = new Node<Integer>(8);

        root.left = n11;
        root.right = n12;

        n11.left = n21;
        n11.right = n22;
        n12.left = n23;
        n12.right = n24;

        n21.left = n31;
        n21.right = n32;
        n22.left = n33;
        n22.right = n34;
        n23.left = n35;
        n23.right = n36;
        n24.left = n37;
        n24.right = n38;

        return root;
    }

    private static Node<Integer> test2() {
        Node<Integer> root = new Node<Integer>(2);
        Node<Integer> n11 = new Node<Integer>(7);
        Node<Integer> n12 = new Node<Integer>(5);
        Node<Integer> n21 = new Node<Integer>(2);
        Node<Integer> n22 = new Node<Integer>(6);
        Node<Integer> n23 = new Node<Integer>(9);
        Node<Integer> n31 = new Node<Integer>(5);
        Node<Integer> n32 = new Node<Integer>(8);
        Node<Integer> n33 = new Node<Integer>(4);

        root.left = n11;
        root.right = n12;

        n11.left = n21;
        n11.right = n22;

        n12.right = n23;
        n22.left = n31;
        n22.right = n32;

        n23.left = n33;

        return root;
    }

    public static void main(String[] args) {

        BTreePrinter.printNode(test1());
        BTreePrinter.printNode(test2());

    }
}

class Node<T extends Comparable<?>> {
    Node<T> left, right;
    T data;

    public Node(T data) {
        this.data = data;
    }
}

class BTreePrinter {

    public static <T extends Comparable<?>> void printNode(Node<T> root) {
        int maxLevel = BTreePrinter.maxLevel(root);

        printNodeInternal(Collections.singletonList(root), 1, maxLevel);
    }

    private static <T extends Comparable<?>> void printNodeInternal(List<Node<T>> nodes, int level, int maxLevel) {
        if (nodes.isEmpty() || BTreePrinter.isAllElementsNull(nodes))
            return;

        int floor = maxLevel - level;
        int endgeLines = (int) Math.pow(2, (Math.max(floor - 1, 0)));
        int firstSpaces = (int) Math.pow(2, (floor)) - 1;
        int betweenSpaces = (int) Math.pow(2, (floor + 1)) - 1;

        BTreePrinter.printWhitespaces(firstSpaces);

        List<Node<T>> newNodes = new ArrayList<Node<T>>();
        for (Node<T> node : nodes) {
            if (node != null) {
                System.out.print(node.data);
                newNodes.add(node.left);
                newNodes.add(node.right);
            } else {
                newNodes.add(null);
                newNodes.add(null);
                System.out.print(" ");
            }

            BTreePrinter.printWhitespaces(betweenSpaces);
        }
        System.out.println("");

        for (int i = 1; i <= endgeLines; i++) {
            for (int j = 0; j < nodes.size(); j++) {
                BTreePrinter.printWhitespaces(firstSpaces - i);
                if (nodes.get(j) == null) {
                    BTreePrinter.printWhitespaces(endgeLines + endgeLines + i + 1);
                    continue;
                }

                if (nodes.get(j).left != null)
                    System.out.print("/");
                else
                    BTreePrinter.printWhitespaces(1);

                BTreePrinter.printWhitespaces(i + i - 1);

                if (nodes.get(j).right != null)
                    System.out.print("\\");
                else
                    BTreePrinter.printWhitespaces(1);

                BTreePrinter.printWhitespaces(endgeLines + endgeLines - i);
            }

            System.out.println("");
        }

        printNodeInternal(newNodes, level + 1, maxLevel);
    }

    private static void printWhitespaces(int count) {
        for (int i = 0; i < count; i++)
            System.out.print(" ");
    }

    private static <T extends Comparable<?>> int maxLevel(Node<T> node) {
        if (node == null)
            return 0;

        return Math.max(BTreePrinter.maxLevel(node.left), BTreePrinter.maxLevel(node.right)) + 1;
    }

    private static <T> boolean isAllElementsNull(List<T> list) {
        for (Object object : list) {
            if (object != null)
                return false;
        }

        return true;
    }

}

Output 1 :

         2               
        / \       
       /   \      
      /     \     
     /       \    
     7       5       
    / \     / \   
   /   \   /   \  
   2   6   3   6   
  / \ / \ / \ / \ 
  5 8 4 5 8 4 5 8 

Output 2 :

       2               
      / \       
     /   \      
    /     \     
   /       \    
   7       5       
  / \       \   
 /   \       \  
 2   6       9   
    / \     /   
    5 8     4   


Answered By - michal.kreuzman
Answer Checked By - Robin (PHPFixing Admin)

Wednesday, August 10, 2022

[FIXED] Why does C# System.Decimal (decimal) "waste" bits?

 August 10, 2022     c#, cpu-architecture, data-structures, decimal, internals     No comments   

Issue

As written in the official docs, the 128 bits of System.Decimal are filled like this:

The return value is a four-element array of 32-bit signed integers.

The first, second, and third elements of the returned array contain the low, middle, and high 32 bits of the 96-bit integer number.

The fourth element of the returned array contains the scale factor and sign. It consists of the following parts:

Bits 0 to 15, the lower word, are unused and must be zero.

Bits 16 to 23 must contain an exponent between 0 and 28, which indicates the power of 10 to divide the integer number.

Bits 24 to 30 are unused and must be zero.

Bit 31 contains the sign: 0 mean positive, and 1 means negative.

With that in mind one can see that some bits are "wasted" or unused.

Why not, for example, 120 bits of integer, 7 bits of exponent, and 1 bit of sign?

Probably there is a good reason for a decimal being the way it is. This question would like to know the reasoning behind that decision.


Solution

Based on Kevin Gosse's comment

For what it's worth, the decimal type seems to predate .net. The .net framework CLR delegates the computations to the oleaut32 lib, and I could find traces of the DECIMAL type as far back as Windows 95

I searched further and found a likely user of the DECIMAL code in oleauth32 Windows 95.

The old Visual Basic (non .NET based) and VBA have a sort-of-dynamic type called 'Variant'. In there (and only in there) you could save something nearly identical to our current System.Decimal.

Variant is always 128 bits, with the first 16 bits reserved for an enum value indicating which data type is inside the Variant.

The separation of the remaining 112 bits could be based on common CPU architectures in the early '90s, or on ease of use for the Windows programmer. It sounds sensible not to pack the exponent and sign into one byte just to have one more byte available for the integer.

When .NET was built, the existing (low-level) code for this type and its operations was reused for System.Decimal.

Nothing of this is 100% verified and I would have liked the answer to contain more historical evidence but that's what I could puzzle together.
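As a sketch (not part of the original answer), the documented layout of the scale-and-sign word can be made concrete in Python. The helper names are hypothetical, and Python integers are unbounded, so this only mimics the 32-bit word:

```python
def pack_flags(negative, exponent):
    """Build the scale-and-sign word as documented: bits 0-15 and
    24-30 are zero, bits 16-23 hold the exponent (0..28), bit 31
    holds the sign."""
    if not 0 <= exponent <= 28:
        raise ValueError("exponent must be between 0 and 28")
    return ((1 << 31) if negative else 0) | (exponent << 16)

def unpack_flags(flags):
    """Recover (sign, exponent) from the packed word."""
    return (flags >> 31) & 1, (flags >> 16) & 0xFF

# 7.5 would be stored as the integer 75 with exponent 1 (75 / 10**1):
print(hex(pack_flags(False, 1)))  # 0x10000
```

The unused regions (bits 0-15 and 24-30) are exactly the "wasted" bits the question asks about.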



Answered By - Tom
Answer Checked By - David Marino (PHPFixing Volunteer)

Sunday, July 31, 2022

[FIXED] How to paginate a trie, so you can find 100 words matching prefix in the trie, starting at a paginated offset?

 July 31, 2022     algorithm, data-structures, javascript, pagination, trie     No comments   

Issue

I am using a trie made to work on alphabets/scripts from any language.

class TrieNode {
  constructor(key) {
    // the "key" value will be the character in sequence
    this.key = key;

    // we keep a reference to parent
    this.parent = null;

    // we have hash of children
    this.children = {};

    // check to see if the node is at the end
    this.end = false;
  }

  getWord() {
    let output = [];
    let node = this;

    while (node !== null) {
      output.unshift(node.key)
      node = node.parent
    }

    return output.join('')
  }
}

class Trie {
  constructor() {
    this.base = new TrieNode(null)
  }

  insert(word) {
    let node = this.base

    const points = Array.from(word)

    for (const i in points) {
      const point = points[i]
      if (!node.children[point]) {
        const child = node.children[point] = new TrieNode(point)
        child.parent = node
      }

      node = node.children[point]

      if (i == word.length - 1) {
        node.end = true
      }
    }
  }

  contains(word) {
    let node = this.base

    const points = Array.from(word)

    for (const i in points) {
      const point = points[i]

      if (node.children[point]) {
        node = node.children[point]
      } else {
        return false
      }
    }

    return node.end;
  }

  find(prefix) {
    let node = this.base
    let output = []

    const points = Array.from(prefix)

    for (const i in points) {
      const point = points[i]

      // make sure prefix actually has words
      if (node.children[point]) {
        node = node.children[point]
      } else {
        // there's none. just return it.
        return output
      }
    }

    const stack = [node]
    while (stack.length) {
      node = stack.shift()
      // base case, if node is at a word, push to output
      if (node.end) {
        output.unshift(node.getWord())
      }

      // iterate through each children, call recursive findAllWords
      for (var child in node.children) {
        stack.push(node.children[child])
      }
    }

    return output
  }
}

Here is how you can find all words matching a prefix. These Scrabble words were used (tmp/scrabble.csv).

const fs = require('fs')
const Trie = require('./Trie')

const words = fs.readFileSync('tmp/scrabble.csv', 'utf-8')
  .trim()
  .split(/\n+/)
  .map(x => x.trim())

const trie = new Trie()

words.forEach(word => trie.insert(word))

console.log(trie.find('zy'))
[
  'zygodactylous', 'zygomorphies', 'zygapophysis',
  'zygapophyses',  'zygomorphic',  'zymologies',
  'zygospores',    'zygosities',   'zygomorphy',
  'zygodactyl',    'zymurgies',    'zymograms',
  'zymogenes',     'zygotenes',    'zygospore',
  'zygomatic',     'zyzzyvas',     'zymosans',
  'zymology',      'zymogram',     'zymogens',
  'zymogene',      'zygotene',     'zygosity',
  'zygomata',      'zyzzyva',      'zymurgy',
  'zymotic',       'zymosis',      'zymoses',
  'zymosan',       'zymogen',      'zymases',
  'zygotic',       'zygotes',      'zygosis',
  'zygoses',       'zygomas',      'zydecos',
  'zymase',        'zygote',       'zygose',
  'zygoma',        'zygoid',       'zydeco',
  'zymes',         'zyme'
]

However, search find('a') will find over 10,000 matches (words starting with a). If this were a web app, you might not want to show all 10,000 at once, maybe only 100 or 1,000 at a time. How could you implement a pagination system against this trie? Is it possible?

From my initial attempts, my thinking was to keep track of the "path" where you left off in the trie (say we are paginating 100 matches/words at a time). So I am trying to modify this function:

class Trie {
  // ...

  find(prefix, max = 100) {
    let node = this.base
    let output = []

    const points = Array.from(prefix)

    for (const i in points) {
      const point = points[i]

      // make sure prefix actually has words
      if (node.children[point]) {
        node = node.children[point]
      } else {
        // return path [-1] since there is no match?
        return [output, [-1]]
      }
    }

    const stack = [node]
    const pathStack = [[0]]
    while (stack.length) {
      const path = pathStack.shift()
      node = stack.shift()
      // base case, if node is at a word, push to output
      if (node.end) {
        output.unshift(node.getWord())
        if (output.length === max) {
          return [output, path]
        }
      }

      // iterate through each children, call recursive findAllWords
      let i = 0
      for (var child in node.children) {
        pathStack.push(path.concat(i++))
        stack.push(node.children[child])
      }
    }

    return [output, [-1]]
  }
}

Maybe this is partially correct? But I don't see how to use the path to jump back to the right spot on the next query.

[
  [
    'abye', 'abut', 'abri', 'abos', 'ably', 'able', 'abet', 'abed',
    'abbe', 'abba', 'abas', 'aals', 'aahs', 'azo',  'ays',  'aye',
    'axe',  'awn',  'awl',  'awe',  'awa',  'avo',  'ave',  'ava',
    'auk',  'att',  'ate',  'ass',  'asp',  'ask',  'ash',  'art',
    'ars',  'arm',  'ark',  'arf',  'are',  'arc',  'arb',  'apt',
    'ape',  'any',  'ant',  'ani',  'ane',  'and',  'ana',  'amu',
    'amp',  'ami',  'ama',  'alt',  'als',  'alp',  'all',  'ale',
    'alb',  'ala',  'ait',  'ais',  'air',  'ain',  'aim',  'ail',
    'aid',  'aha',  'ago',  'age',  'aga',  'aft',  'aff',  'adz',
    'ads',  'ado',  'add',  'act',  'ace',  'aby',  'abs',  'abo',
    'aba',  'aas',  'aal',  'aah',  'ay',   'ax',   'aw',   'at',
    'as',   'ar',   'an',   'am',   'al',   'ai',   'ah',   'ag',
    'ae',   'ad',   'ab',   'aa'
  ],
  [ 0, 1, 17, 0 ]
]

Also, the words appear to be in an unusual order; maybe I am not inserting them in the correct order?

This way, in theory, if you had the path, then when you call find(prefix, startAtPath), you would pass the startAtPath as a second parameter, and start the search from that point forward, to get the next 100 words. I haven't seen any "trie pagination" examples on the web, so maybe it's not a thing that's possible, not sure.


Solution

In TrieNode.constructor add a this.count property.

In insert, update the counts for the insertion.

In find, add startAt and max parameters. As long as the number of words in node's subtree is at most startAt, you can skip past node (do not add its children to the stack) and decrease startAt by that count.

And then you collect up to max words to return.

This allows you to skip past previous results without having to enumerate them. You do not need to pass in the path, just where you expect to start.

The cause of your order problem appears to be that you are using unshift instead of push.


Here is your code edited to give the right answer. I also fixed a minor bug where your output order depended on the order in which node.children was inserted; without that fix it works only if you load the trie in alphabetical order.

class TrieNode {
  constructor(key) {
    // the "key" value will be the character in sequence
    this.key = key;

    // we keep a reference to parent
    this.parent = null;

    // we have hash of children
    this.children = {};

    // check to see if the node is at the end
    this.end = false;

    // count how many words are at this node or below.
    this.count = 0;

    // Ensure children are in the right order.
    this.lastChildKey = '';
  }

  getWord() {
    let output = [];
    let node = this;

    while (node !== null) {
      output.unshift(node.key)
      node = node.parent
    }

    return output.join('')
  }
}

class Trie {
  constructor() {
    this.base = new TrieNode(null)
  }

  insert(word) {
    let node = this.base

    const points = Array.from(word)

    for (const i in points) {
      const point = points[i]
      node.count++
      if (!node.children[point]) {
        const child = node.children[point] = new TrieNode(point)
        child.parent = node
      }

      // If words were inserted out of order, fix the order of this
      // node's children so that traversal stays alphabetical.
      if (node.lastChildKey <= point) {
        node.lastChildKey = point
      }
      else {
        // Need to fix order of children (for...of, not for...in,
        // so we iterate the sorted keys themselves, not their indices).
        const keys = Object.keys(node.children).sort()
        const orderedChildren = {}
        for (const key of keys) {
          orderedChildren[key] = node.children[key]
        }
        node.children = orderedChildren
      }

      node = node.children[point]

      if (i == points.length - 1) {
        node.end = true
      }
    }
  }

  contains(word) {
    let node = this.base

    const points = Array.from(word)

    for (const i in points) {
      const point = points[i]

      if (node.children[point]) {
        node = node.children[point]
      } else {
        return false
      }
    }

    return node.end;
  }
  find(prefix, startAt, maxCount) {
    let node = this.base
    let output = []

    const points = Array.from(prefix)

    for (const i in points) {
      const point = points[i]

      // make sure prefix actually has words
      if (node.children[point]) {
        node = node.children[point]
      } else {
        // there's none. just return it.
        return output
      }
    }

    const stack = [node]
    while (stack.length) {
      if (maxCount <= output.length) {
          return output
      }
      node = stack.pop()
      // Words in this subtree: node.count words below it, plus the
      // node's own word if it ends one (count excludes the node itself).
      const wordsHere = node.count + (node.end ? 1 : 0)
      if (wordsHere <= startAt) {
        // The whole subtree lies before the requested offset: skip it.
        startAt -= wordsHere
      }
      else {
        // base case, if node is at a word, push to output
        if (node.end) {
          if (0 < startAt) {
            startAt--;
          }
          else {
            output.push(node.getWord())
          }
        }

        // Push children in reverse order so the stack pops them
        // alphabetically; depth-first keeps each subtree contiguous,
        // which is what makes the bulk skip above valid.
        for (const child of Object.keys(node.children).reverse()) {
          stack.push(node.children[child])
        }
      }
    }

    return output
  }
}

exports.Trie = Trie

Here is your demo program edited for the changes.

const fs = require('fs')
const Trie = require('./Trie')

const words = fs.readFileSync('tmp/scrabble.csv', 'utf-8')
  .trim()
  .split(/\n+/)
  .map(x => x.trim())

const trie = new Trie.Trie()

words.forEach(word => trie.insert(word))

console.log(trie.find('a', 0, 12))

console.log(trie.find('a', 6, 6))


Answered By - btilly
Answer Checked By - Timothy Miller (PHPFixing Admin)

Monday, July 25, 2022

[FIXED] How can I extract a single value from a JSON response?

 July 25, 2022     data-structures, json, python     No comments   

Issue

I wrote some code to get data from a web API. I was able to parse the JSON data from the API, but the result I gets looks quite complex. Here is one example:

>>> my_json
{'name': 'ns1:timeSeriesResponseType', 'declaredType': 'org.cuahsi.waterml.TimeSeriesResponseType', 'scope': 'javax.xml.bind.JAXBElement$GlobalScope', 'value': {'queryInfo': {'creationTime': 1349724919000, 'queryURL': 'http://waterservices.usgs.gov/nwis/iv/', 'criteria': {'locationParam': '[ALL:103232434]', 'variableParam': '[00060, 00065]'}, 'note': [{'value': '[ALL:103232434]', 'title': 'filter:sites'}, {'value': '[mode=LATEST, modifiedSince=null]', 'title': 'filter:timeRange'}, {'value': 'sdas01', 'title': 'server'}]}}, 'nil': False, 'globalScope': True, 'typeSubstituted': False}

Looking through this data, I can see the specific data I want: the 1349724919000 value that is labelled as 'creationTime'.

How can I write code that directly gets this value?

I don't need any searching logic to find this value. I can see what I need when I look at the response; I just need to know how to translate that into specific code to extract the specific value, in a hard-coded way. I read some tutorials, so I understand that I need to use [] to access elements of the nested lists and dictionaries; but I can't figure out exactly how it works for a complex case.

More generally, how can I figure out what the "path" is to the data, and write the code for it?


Solution

For reference, let's see what the original JSON would look like, with pretty formatting:

>>> print(json.dumps(my_json, indent=4))
{
    "name": "ns1:timeSeriesResponseType",
    "declaredType": "org.cuahsi.waterml.TimeSeriesResponseType",
    "scope": "javax.xml.bind.JAXBElement$GlobalScope",
    "value": {
        "queryInfo": {
            "creationTime": 1349724919000,
            "queryURL": "http://waterservices.usgs.gov/nwis/iv/",
            "criteria": {
                "locationParam": "[ALL:103232434]",
                "variableParam": "[00060, 00065]"
            },
            "note": [
                {
                    "value": "[ALL:103232434]",
                    "title": "filter:sites"
                },
                {
                    "value": "[mode=LATEST, modifiedSince=null]",
                    "title": "filter:timeRange"
                },
                {
                    "value": "sdas01",
                    "title": "server"
                }
            ]
        }
    },
    "nil": false,
    "globalScope": true,
    "typeSubstituted": false
}

That lets us see the structure of the data more clearly.

In the specific case, first we want to look at the corresponding value under the 'value' key in our parsed data. That is another dict; we can access the value of its 'queryInfo' key in the same way, and similarly the 'creationTime' from there.

To get the desired value, we simply put those accesses one after another:

my_json['value']['queryInfo']['creationTime'] # 1349724919000
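As an aside (not part of the original answer): if any key along that path might be missing, chaining dict.get calls with empty-dict fallbacks returns None instead of raising a KeyError:

```python
def get_creation_time(data):
    # Each .get falls back to an empty dict, so a missing intermediate
    # key yields None at the end instead of raising a KeyError.
    return data.get('value', {}).get('queryInfo', {}).get('creationTime')
```

With the response above, get_creation_time(my_json) returns 1349724919000; with an empty dict it returns None.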


Answered By - dm03514
Answer Checked By - Katrina (PHPFixing Volunteer)

Tuesday, June 28, 2022

[FIXED] Why my code is giving Time Limit Exceeded?

 June 28, 2022     c++, coding-style, data-structures, depth-first-search, graph     No comments   

Issue

Today while solving a question on LeetCode, I applied DFS on a directed graph, which runs in O(N) time, but my code is giving TLE. After trying many times, I checked the comments and found an accepted solution which also runs in O(N). So now I am confused as to why my code is not getting accepted and gives time limit exceeded. My code:-

public:
    int ans=INT_MIN;
    vector<vector<int>> gr;
    void dfs(int head, int time, vector<int> inform){
        if(gr[head].size()==0) {ans=max(ans,time);return;}
        for(auto next:gr[head]){
            dfs(next, inform[head]+time, inform);
        }
    }
    int numOfMinutes(int n, int headID, vector<int>& manager, vector<int>& informTime) {
        gr.resize(n);
        for(int i=0 ; i<n ; i++){
            if(manager[i]!=-1) gr[manager[i]].push_back(i);
        }
        dfs(headID,0, informTime);
        return ans;
    }

One of accepted code:-

    int numOfMinutes(int n, int headID, vector<int>& manager, vector<int>& informTime) {
        int res = 0;
        for (int i = 0; i < n; ++i)
            res = max(res, dfs(i, manager, informTime));
        return res;
    }

    int dfs(int i, vector<int>& manager, vector<int>& informTime) {
        if (manager[i] != -1) {
            informTime[i] += dfs(manager[i], manager, informTime);
            manager[i] = -1;
        }
        return informTime[i];
    }

If anyone needs question link:- https://leetcode.com/problems/time-needed-to-inform-all-employees/


Solution

In your dfs() function, you pass inform by value, which means a fresh copy of the whole vector is made on every recursive call instead of reusing the original. With O(N) calls, each copying an N-element vector, the total work becomes O(N²), which is what causes the TLE.

You should pass by reference instead.

void dfs(int head, int time, vector<int> &inform)


Answered By - justANewb stands with Ukraine
Answer Checked By - Mildred Charles (PHPFixing Admin)

[FIXED] How should I find cycle in the directed graph and list out the nodes which are forming the cycle?

 June 28, 2022     algorithm, data-structures, graph, javascript     No comments   

Issue

I have an array of objects in JavaScript and I am trying to draw the directed graph. How should I find whether it contains a cycle or not, and if it does, which elements form the cycle? The graph is not strongly connected, and nodes can be isolated, like "f".

 array = {};
//operations   parents
    array[a] = [b,c]
    array[b] = [d,c]
    array[e] = [a,b]
    array[d] = [e]
    array[f] = []

Considering the above data (keys as children, values as parents), I made a directed graph which looks like this:

I want to find the cycles between the operations; for example, here we have the cycle e-d-b-e. How should I find the cycle? I am using JavaScript.


Solution

Here is a BFS solution that will find one cycle (if there are any), which will be (one of) the shortest one(s).

function getCycle(graph) {
    // Copy the graph, converting all node references to String
    graph = Object.assign(...Object.keys(graph).map( node =>
                ({ [node]: graph[node].map(String) }) 
    ));

    let queue = Object.keys(graph).map( node => [node] );
    while (queue.length) {
        const batch = [];
        for (const path of queue) {
            const parents = graph[path[0]] || [];
            for (const node of parents) {
                if (node === path[path.length-1]) return [node, ...path];
                batch.push([node, ...path]);
            }
        }
        queue = batch;
    }
}

// First example
var graph = {
    a: ['b', 'c'],
    b: ['d', 'c'],
    e: ['a', 'b'],
    d: ['e']
};
var result = getCycle(graph);
console.log(result);

// Second example (numeric node references)
var graph = {
    0: [4],
    1: [4,0],
    2: [0,1],
    3: [1],
    4: [3]
};
var result = getCycle(graph);
console.log(result);
.as-console-wrapper { max-height: 100% !important; top: 0; }

NB: If you define your graph with number references, a type conversion is needed since object properties are always of type string. We could use == instead of ===, but then the output will still be a mix of string and numbers; I think it is nicer to first convert every node reference to string.



Answered By - trincot
Answer Checked By - David Marino (PHPFixing Volunteer)

Monday, June 27, 2022

[FIXED] How to keep track of depth in breadth first search?

 June 27, 2022     algorithm, data-structures, graph, graph-algorithm     No comments   

Issue

I have a tree as input to the breadth first search and I want to know as the algorithm progresses at which level it is?

# Breadth First Search Implementation
graph = { 
    'A':['B','C','D'],
    'B':['A'],
    'C':['A','E','F'],
    'D':['A','G','H'],
    'E':['C'],
    'F':['C'],
    'G':['D'],
    'H':['D']
    }


def breadth_first_search(graph,source):
    """
    This function is the Implementation of the breadth_first_search program
    """
    # Mark each node as not visited
    mark = {}
    for item in graph.keys():
        mark[item] = 0

    queue, output = [],[]

    # Initialize an empty queue with the source node and mark it as explored
    queue.append(source)
    mark[source] = 1
    output.append(source)

    # while queue is not empty
    while queue:
        # remove the first element of the queue and call it vertex
        vertex = queue[0]
        queue.pop(0)
        # for each edge from the vertex do the following
        for vrtx in graph[vertex]:
            # If the vertex is unexplored
            if mark[vrtx] == 0:
                queue.append(vrtx)  # append it to the queue
                mark[vrtx] = 1      # and mark it as explored
                output.append(vrtx) # fill the output vector
    return output

print(breadth_first_search(graph, 'A'))

It takes tree as an input graph, what I want is, that at each iteration it should print out the current level which is being processed.


Solution

You don't need to use extra queue or do any complicated calculation to achieve what you want to do. This idea is very simple.

This does not use any extra space other than queue used for BFS.

The idea I am going to use is to add null at the end of each level. So the number of nulls you encountered +1 is the depth you are at. (of course after termination it is just level).

     int level = 0;
     Queue <Node> queue = new LinkedList<>();
     queue.add(root);
     queue.add(null);
     while(!queue.isEmpty()){
          Node temp = queue.poll();
          if(temp == null){
              level++;
              queue.add(null);
              if(queue.peek() == null) break; // two consecutive nulls: all nodes visited
              else continue;
          }
          if(temp.right != null)
              queue.add(temp.right);
          if(temp.left != null)
              queue.add(temp.left);
     }
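The same sentinel idea translates directly to the question's Python setup. This is a sketch (None plays the role of null), not the answerer's code:

```python
from collections import deque

def breadth_first_search_levels(graph, source):
    """BFS that tracks the current level by appending a None sentinel
    after each completed level."""
    output, level = [], 0
    visited = {source}
    queue = deque([source, None])
    while queue:
        vertex = queue.popleft()
        if vertex is None:
            level += 1
            if not queue:      # two consecutive sentinels: all done
                break
            queue.append(None)
            continue
        output.append((vertex, level))
        for neighbor in graph[vertex]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return output
```

Run against the question's graph from 'A', this yields A at level 0; B, C, D at level 1; and E, F, G, H at level 2.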


Answered By - Karthik
Answer Checked By - Pedro (PHPFixing Volunteer)

[FIXED] What is represent by edges when facebook using graph data structure

 June 27, 2022     data-structures, facebook-graph-api, graph     No comments   

Issue

When learning the graph data structure, I learned that Facebook also uses it, and I know the nodes represent persons. Do the edges represent connections between them? When we become a friend with someone, is there an edge between me and that new friend?


Solution

Yes, a new connection (edge) is established. And to explore connections, those social networks use breadth-first search (BFS).



Answered By - Dinithi
Answer Checked By - Robin (PHPFixing Admin)

[FIXED] What algorithm would I use to implement a modified graph search problem?

 June 27, 2022     algorithm, data-structures, graph     No comments   

Issue

Consider a car trip where there are gas stations with unlimited gasoline and parts for one-time use. The parts permanently increase the size of the gas tank (measured in mileage), but once used, a part cannot be used again. The car itself originally has a tank that can travel a certain number of miles. What would be the best way to find the minimum initial gas tank size needed to travel between two gas stations, as well as to find which gas stations can be reached initially with a given gas tank?

I initially tried Dijkstra, but it doesn't work on negative values, and I don't think a Minimum Spanning Tree is best because it doesn't necessarily minimize the distance between two nodes. My initial thought would be to find an algorithm that can return the total minimized edge weights (including negatives) between two values, but I'm not sure how to do that.


Solution

The key observation is that if you can get to any n different gas stations, then you can go to all of them, collect all of their gas tank enhancements, and drive around between all reachable stations however much you like.

Since it only matters which stations you can connect to, you don't have to remember anything about the paths required to get there, and you can solve this using a simple variant of an MST algorithm.

As we perform Prim's algorithm, for example, we can keep track of the total size of all reachable gas tank augments.

Perform Prim's algorithm starting at the initial station, until you get to the destination. Whenever you add a new station, subtract the current total augment size from the edge distance to figure out how much initial tank it takes to get there. Then add the station's augment to your total.

Your answer is the maximum initial tank requirement that you discover until you reach the destination.
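The Prim variant described above can be sketched in Python. The graph and augment formats here are my own assumptions, not from the original post:

```python
import heapq

def min_initial_tank(graph, augments, start, dest):
    """graph maps each station to a list of (neighbor, distance) pairs;
    augments maps a station to the one-time tank increase found there.
    Returns the minimum initial tank size needed to reach dest, or
    None if dest is unreachable."""
    if start == dest:
        return 0
    visited = {start}
    total_aug = augments.get(start, 0)
    needed = 0
    heap = [(d, n) for n, d in graph.get(start, [])]
    heapq.heapify(heap)
    while heap:
        dist, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        # The initial tank must cover this leg minus collected augments.
        needed = max(needed, dist - total_aug)
        if node == dest:
            return needed
        total_aug += augments.get(node, 0)
        for nxt, d in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(heap, (d, nxt))
    return None  # destination not reachable
```

Since gasoline is unlimited at every station, only the longest single leg (net of augments collected so far) constrains the initial tank, which is what the running max captures.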



Answered By - Matt Timmermans
Answer Checked By - Pedro (PHPFixing Volunteer)

Sunday, June 26, 2022

[FIXED] What is the distinction between sparse and dense graphs?

 June 26, 2022     data-structures, graph, graph-theory     No comments   

Issue

I read it is ideal to represent sparse graphs by adjacency lists and dense graphs by an adjacency matrix. But I would like to understand the main difference between sparse and dense graphs.


Solution

A dense graph is one in which the number of edges is close to the maximum possible number of edges. A sparse graph is one in which the number of edges is close to the minimum. A sparse graph can be a disconnected graph.
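A common way to make this concrete (not stated in the original answer) is the graph's density: the ratio of edges present to the maximum possible. A quick sketch for a simple undirected graph:

```python
def density(num_vertices, num_edges):
    """Ratio of actual edges to the maximum possible number of edges,
    C(|V|, 2), for a simple undirected graph."""
    max_edges = num_vertices * (num_vertices - 1) // 2
    return num_edges / max_edges
```

A density near 1 favors an adjacency matrix; a density near 0 favors adjacency lists.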



Answered By - CAMOBAP
Answer Checked By - Marie Seifert (PHPFixing Admin)

Thursday, May 19, 2022

[FIXED] how to test if a variable is valid or whether it is initialized or not in C?

 May 19, 2022     c, data-structures, initialization, variables     No comments   

Issue

I am learning C from scratch with Harvard's cs50 course. I've been given an array that's been initialized this way:

int stuff[9][9];

now I have to handle it.
I want to check if each item of the array exists: if not, make stuff[i][j] = 0, otherwise, stuff[i][j]++
But I am searching without success for how to check whether the variable I am manipulating exists or is valid: there is no !!stuff[i][j], nor some typeof stuff[i][j], nor a comparison like if (stuff[i][j] == 'undefined') or NULL or any variation of that which I can use...
So, how can I check if a declared variable has not yet been initialized?


update
I've made a test with this:

int a[3];
for(int i = 0; i < 3; i++)
{
    a[i] = a[i] || 0;
}

for(int i = 0; i < 3; i++)
{
    printf("a[%i] -> %i\n", i, a[i]);
}

so, if a[i] didn't exist (i.e. had no value assigned to it), it would receive the value 0. This was the output of the printf:

a[0] -> 1
a[1] -> 1
a[2] -> 0

That's good because this approach didn't throw an error, but... what are those numbers??


Solution

Assuming the above array is a local variable and not a global, the values of the array are uninitialized and in fact you can't check whether a variable is uninitialized or not, as simply attempting to read such a variable can trigger undefined behavior.

It sounds like you want all array elements to start with the value 0. The simplest way to do this is to initialize the array as such:

int stuff[9][9] = {{ 0 }};

Which explicitly sets element [0][0] to 0 and implicitly sets all other elements to 0.

Alternately, you can create a loop to set all values to 0 to start before doing your "regular" processing.



Answered By - dbush
Answer Checked By - Pedro (PHPFixing Volunteer)

Copyright © PHPFixing