Thursday, June 30, 2016

bool inside struct changes value nonsensically when put in vector [on hold]


The program below creates random numbers between 20 and 60, checks whether they are prime, creates an instance of the struct p, and saves both values as n (number) and b (is-prime). Then it saves the struct into a vector. When the b value of the last element in the vector is retrieved, it has changed (to a random int). Also, sometimes when I compile and run it, the program suddenly stops responding. Why is this?

#include <cstdlib>
#include <ctime>
#include <iostream>
#include <vector>
using namespace std;

bool isPrimo(int i) {
    for(int f = 2; f*f <= i; ++f)
        if(i%f == 0)
            return false;
    return true;
}
struct p {
    int n;
    bool b;
};
int main(){
    int const CANT = 15;

    srand(time(NULL));
    vector<p> j;
    for(int i = 0; i < CANT; i++) {
        p ni;
        ni.n = 20 + rand()%41; // 20..60 inclusive, as described above
        ni.b = isPrimo(ni.n);
        j.push_back(ni);
        cout<<j.back().n<<' '<<j.back().b<<' '<<ni.b<<endl;
    }
}

I have cleaned the project. The compiler is MinGW (essentials: binutils 2.26, GCC 5.3.0, mingw-w64 4.0.6) with C++14, and the IDE is Code::Blocks.


iOS Swift PayPal Always In Sandbox


I have successfully integrated PayPal into my app. However, it seems to be stuck in Sandbox mode. Here is my code to initialize it:

PayPalMobile .initializeWithClientIdsForEnvironments([PayPalEnvironmentProduction: "APP-0XU423690N0796541"])

As can be seen, I'm not even specifying a sandbox ID.

My code to initiate the payment is:

let payPalConfig = PayPalConfiguration()

payPalConfig.merchantName = "MacCrafters Software"

let item1 = PayPalItem(name: "Donation", withQuantity: 1, withPrice: NSDecimalNumber(string: amountField.text!), withCurrency: "USD", withSku: "Donation")

let items = [item1]
let subtotal = PayPalItem.totalPriceForItems(items)

// Optional: include payment details
let shipping = NSDecimalNumber(string: "0.00")
let tax = NSDecimalNumber(string: "0.00")
let paymentDetails = PayPalPaymentDetails(subtotal: subtotal, withShipping: shipping, withTax: tax)

let total = subtotal.decimalNumberByAdding(shipping).decimalNumberByAdding(tax)

let payment = PayPalPayment(amount: total, currencyCode: "USD", shortDescription: "Donation", intent: .Sale)

payment.items = items
payment.paymentDetails = paymentDetails

if (payment.processable) {
    let paymentViewController = PayPalPaymentViewController(payment: payment, configuration: payPalConfig, delegate: self)
    presentViewController(paymentViewController!, animated: true, completion: nil)
}

At the bottom of the PayPal view it always says "Mock Data". I get the same results no matter if I'm in the simulator or on a device. What am I doing wrong?


How to dynamically create state machine using boost msm


I have no idea yet how to create an FSM using boost msm dynamically, for instance by reading template XML files that describe the machine. Any idea how to address the problem? I want to use the functor approach with boost msm 1.61. Thanks in advance for ideas, and my apologies for not presenting anything yet, but I really have no clue so far...

UPDATE

I have made a little progress such that I can create a base class for the front end the common way:

class SMBase : public msmf::state_machine_def<SMBase>
{
 ...
};
using SMBaseBackend = msm::back::state_machine<SMBase>;

class SMDerived : public SMBase
{
 ...
};
using SMDerivedBackend = msm::back::state_machine<SMDerived>;


class SMDerived2 : public SMBase
{
 ...
};
using SMDerived2Backend = msm::back::state_machine<SMDerived2>;

However, the state machine itself is steered by the backend, and I can see no way so far of choosing the latter at runtime (for instance using a

map<int, smart_pointer<SMBaseBackend> >

). Any help is appreciated.
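Since the backends SMBaseBackend, SMDerivedBackend, etc. share no common base class, one direction I am exploring is type erasure: hide each backend behind a small interface of my own and store those in the map instead. This is only a sketch under that assumption (IMachine, MachineHolder, and the start-only interface are my own invention, not part of boost msm):

#include <map>
#include <memory>

struct IMachine {
    virtual ~IMachine() {}
    virtual void start() = 0;          // forward whatever operations all machines share
};

template <class Backend>
struct MachineHolder : IMachine {
    Backend sm;                        // the concrete msm backend lives here
    void start() override { sm.start(); }
};

std::map<int, std::unique_ptr<IMachine>> machines;
// machines[0].reset(new MachineHolder<SMBaseBackend>());
// machines[1].reset(new MachineHolder<SMDerivedBackend>());
// machines.at(choice)->start();       // backend chosen at runtime by key

Events would need the same treatment (one virtual per event type, each forwarding to process_event), since process_event itself is a template and cannot be made virtual directly.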


'sliderChanged: unrecognized selector sent to instance'


I have created a slider and a text label to display the value of the slider. Both done programmatically:

class FilterCell: UITableViewCell {

override init(style: UITableViewCellStyle, reuseIdentifier: String?) {
    super.init(style: style, reuseIdentifier: reuseIdentifier)
    setupViews()
}

required init?(coder aDecoder: NSCoder) {
    fatalError("init(coder:) has not been implemented")
}

let slider : UISlider = {
    let slider = UISlider()
    slider.minimumValue = 0
    slider.maximumValue = 50
    slider.value = 50
    slider.continuous = true
    slider.userInteractionEnabled = true
    slider.translatesAutoresizingMaskIntoConstraints = false
    return slider
}()

var distanceLabel : UILabel = {
    let label = UILabel()
    label.text = "Distance: 50km"
    label.font = UIFont.systemFontOfSize(15.0)
    label.textColor = UIColor.blackColor()
    label.translatesAutoresizingMaskIntoConstraints = false
    return label
}()

func sliderChanged(sender: UISlider) {
    var sliderValue = sender.value
    distanceLabel.text = "Distance: \(sliderValue)km"
}
}

Inside my table view I attach the function as the slider's target for value changes:

func tableView(tableView:UITableView, cellForRowAtIndexPath indexPath:NSIndexPath) -> UITableViewCell {

    let cell = tableView.dequeueReusableCellWithIdentifier(filterCellId) as! FilterCell
    cell.slider.addTarget(self, action: #selector(FilterCell.sliderChanged(_:)), forControlEvents: .ValueChanged)

    return cell

}

The slider and the label display fine. When I interact with the slider to change the value, it crashes with 'sliderChanged:]: unrecognized selector sent to instance 0x7fb06d0df040'.


Multiple lines of text in UILabel in iOS 9


I'm working on an iOS project, but when I updated to iOS 9 I had some problems with multiline UILabels. I'm using Auto Layout.

Anybody knows how to do it in iOS 9 ?

I tried different ways such as:

textLabel.lineBreakMode = UILineBreakModeWordWrap;
textLabel.numberOfLines = 0;

(from other similar question) but it did not work.

This is my logic to show multilines:

IB config: screenshots of the preferredMaxLayoutWidth setting and the label configuration (images omitted).

I update this value programmatically:

- (void)configureLabelsMaxLayoutWidth {
    [self.view layoutIfNeeded];
    self.titleLabel.preferredMaxLayoutWidth = CGRectGetWidth(self.titleLabel.frame);
}

I call this method in viewWillAppear.


Template overload precedence


I want to have two overloads of a template function but have one take precedence. I am trying to define a size() function that uses the size() member function if available but falls back to using std::begin() and std::end() (this is needed for, say, std::forward_list). This is what they look like:

template <class Container>
constexpr auto size(const Container& cont) -> decltype (cont.size())
{
    return cont.size();
}

template <class Container>
auto size(const Container& cont) -> decltype (
    std::distance(std::begin(cont), std::end(cont)))
{
    return std::distance(std::begin(cont), std::end(cont));
}

The problem is that the compiler can't decide which overload to use for containers with both a size() and begin()/end(). How do I make it choose the first implementation when possible? (I know SFINAE is part of the solution, but I am not knowledgeable enough in the arcane arts to figure it out.)

Also (unrelated), is there an easier way to declare the return type for the second function?
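For what it's worth, here is a minimal sketch of the usual tag-dispatch fix as I understand it: both implementations get an extra tag parameter, the member-function version takes int and the fallback takes long, so passing 0 (an int) prefers the first whenever decltype(cont.size()) is well-formed, and SFINAE silently removes it otherwise:

#include <iterator>

template <class Container>
constexpr auto size_impl(const Container& cont, int) -> decltype(cont.size())
{
    return cont.size();                       // preferred: 0 matches int exactly
}

template <class Container>
auto size_impl(const Container& cont, long) -> decltype(
    std::distance(std::begin(cont), std::end(cont)))
{
    return std::distance(std::begin(cont), std::end(cont));  // fallback
}

template <class Container>
auto size(const Container& cont) -> decltype(size_impl(cont, 0))
{
    return size_impl(cont, 0);                // 0 is int, so the member version is tried first
}

As for the second question: with C++14 the trailing return types can simply be dropped in favor of plain auto return type deduction.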


Json web-service Swift [on hold]


I'm starting to develop an app on iOS, and one of the tasks is to get information from a web service. The function I'm running returns the error below:

ErrorCould not parse JSON: Error Domain=NSCocoaErrorDomain Code=3840 "The operation couldn’t be completed. (Cocoa error 3840.)" (No value.) UserInfo=0x7fc3729800d0 {NSDebugDescription=No value.}

As far as I know, since I'm using a VM to run OS X 10.10 and Xcode 6.4, the Swift version is 1.2.

var url:NSURL?

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.

        url = NSURL (string: "htpp://jsonplaceholder.typicode.com/todos/1")
        let urlRequest = NSURLRequest(URL: self.url!)
        let config = NSURLSessionConfiguration.defaultSessionConfiguration()
        let session = NSURLSession(configuration: config)

        let task = session.dataTaskWithRequest(urlRequest, completionHandler:{(data,response,error) in
            if error == nil{
                print("No error")
            } else{
                print("Error")
            }

            let responseData = data

            var error:NSError? = nil
            if let todo: AnyObject = NSJSONSerialization.JSONObjectWithData(responseData, options: nil, error:&error) as? [String:AnyObject] {
                    print("Info completa")

            } else {
                println("Could not parse JSON: (error!)")
            }

        })

        task.resume()


    }

Can anybody help?

Thanks in advance


how to "suppress" DRAMSim2 warnings


I have a PINtool tracing all memory accesses of a program, and I use DRAMSim2 to simulate them.

When the PINtool calls DRAMSim2 to simulate them, I get the following warning, which slows down the simulation a lot because it is printed to the terminal:

WARNING: address 0x7ffda17e68f8 is not aligned to the request size of 32

I found that the code printing this warning is in AddressMapping.cpp:

if ((physicalAddress & transactionMask) != 0)
{
    DEBUG("WARNING: address 0x"<<std::hex<<physicalAddress<<std::dec<<" is not aligned to the request size of "<<transactionSize);
}

In the makefile I found this:

OPTFLAGS=-O3
ifdef DEBUG
ifeq ($(DEBUG), 1)
OPTFLAGS= -O0 -g
endif
endif
CXXFLAGS+=$(OPTFLAGS)

but I don't know how to handle these flags. (I tried commenting out the DEBUG call in the .cpp file, but that changed nothing.)

1) How can I suppress this warning?

2) I believe the warning does not indicate a mistake on my side, but how can I resolve it?


Linux LibdvbV5 EIT grabbing - not getting enough days


I'm just starting to write some (C++) code on a Ubuntu 14.04.4 system to access DVB streams via a DVB TV USB tuner. I'm using libdvbv5. I'm in the UK using terrestrial freeview.

Trying to grab the off-air event information (EIT). I managed to do so - it produces a list of events with service id, start time, duration, name, description, etc. All seems fine - except that it only grabs up to 3 days in advance, whereas I notice that other apps manage to get 7 days in advance.

I had a look at some other projects for this, such as dvbtee and mythtv, but I have not yet managed to work out what is wrong (lots of code). Nothing I do filters by date, nor, from what I can see, does libdvbv5.

The EIT program id is 0x12, and the full schedule table id is 0x50 (up to 0x5f). As I say, it grabs all the information without any errors, but only for 3 days in advance, and I know there is definitely more available.

This makes me think I am doing the right thing but looking in the wrong place? Any suggestions welcome.


AirPlay external screen returning (0, 0, 0, 0) as bounds


I'm working with AirPlay right now to display an AVPlayerLayer. Here is a snippet of my code:

        let secondScreen = UIScreen.screens()[1]
        secondScreen.overscanCompensation = UIScreenOverscanCompensation(rawValue: 3)!
        let screenBounds = secondScreen.bounds

        self.secondWindow = nil // free window when switching between two AirPlay devices
        self.secondWindow = UIWindow.init(frame: screenBounds)
        self.secondWindow?.screen = secondScreen

        layer.removeFromSuperlayer()
        layer.frame = screenBounds
        layer.videoGravity = AVLayerVideoGravityResizeAspect
        self.externalAirPlayView = nil // free view when switching between two AirPlay devices
        self.externalAirPlayView = UIView(frame: screenBounds)
        self.externalAirPlayView!.layer.addSublayer(layer)
        self.secondWindow?.addSubview(self.externalAirPlayView!)
        self.secondWindow?.makeKeyAndVisible()

This code usually works fine, but sometimes I get (0, 0, 0, 0) as the bounds of the external screen. I also get (0, 0, 0, 0) in the UIScreenDidConnectNotification. In both of these cases the AVPlayerLayer does not show up on the AirPlay device because the frame of the window is set incorrectly.

Just a note, if I get (0, 0, 0, 0) as the bounds even once, I will never get the correct bounds again until I either restart the app or I reinitialize the current view controller. Restarting the AirPlay device doesn't seem to help.

Is there a way to get the correct bounds of the external screen?


How do I read JSON on my Golang server that I've posted from iOS using NSData?


I'm trying to validate a receipt by sending it to my custom server from iOS.

I have my NSMutableURLRequest and set it up as so:

    let body: [String: AnyObject] = ["receipt": receipt, "prod_id": productID]

    let optionalJson: NSData?

    do {
        optionalJson = try NSJSONSerialization.dataWithJSONObject(body, options: [])
    } catch _ {
        optionalJson = nil
    }

    guard let json = optionalJson else { return }

    request.HTTPBody = json

Then I send it off to my server, which is written in Go, but I don't know how to read the data there.

Previously I sent only the data (in raw form, not a JSON structure), and turned it into a base 64 encoded string like this before shipping it off:

data, _ := ioutil.ReadAll(request.Body)
encodedString := base64.StdEncoding.EncodeToString(data)

But now I have a JSON structure that links a prod_id, which is a string, with the receipt data, which I assume is bytes. How do I extract that into readable JSON and then turn it into a base 64 encoded string as above?

I guess I'm not sure what structure the JSON would take.


Can you programmatically determine what app has the audio session on iPhone?


Is it possible to programmatically determine which app has the audio session in iOS? My understanding is that the last app to play audio has the session, regardless of whether it is still playing audio or not.

Note: I know the following code is possible (from here: Detecting active AVAudioSessions on iOS device), but it only tells me that audio is playing in another app, not if another app has the audio session but isn't playing:

// query if other audio is playing
BOOL isPlayingWithOthers = [[AVAudioSession sharedInstance] isOtherAudioPlaying];
// test it with...
(isPlayingWithOthers) ? NSLog(@"other audio is playing") : NSLog(@"no other audio is playing");

My app can play background audio and supports BT low energy. It can therefore be in the background and play audio. However, if another app has taken the audio session (for example, Spotify), I'd like to know, so that I can do something about it - such as sending a message to the display of my BTLE device asking the user to foreground my app.

Not elegant but I'm not sure there's much else I can do?


Sharing FFMPEG video stream data between processes


I'm trying to find a method for sharing FFMPEG library datatypes between two processes.

The project that I'm working on requires one process to buffer the FFMPEG stream that is being received while another process reads from the buffer and performs some actions on the video stream. Unfortunately, I can't use a multi-threaded approach for this project; due to some limitations in my system I have to use separate processes.

The data that I would like to share I have placed in a general struct as follows:

struct FFMPEGData {
    AVFormatContext *pFormatCtx;
    AVCodecContext  *pCodecCtx;
    AVCodec         *pCodec;
    AVFrame         *pFrame, dst;
    AVPacket        *packet;
    AVPacket        *pack = new AVPacket[packetNum]; // packetNum defined elsewhere
};

The buffering process uses the format context and codec context to read the video stream, and then places packets in the AVPacket array pack. The other process should grab packets from the array and decode them, also using the format and codec contexts.

I looked into the Boost Interprocess library, but it does not seem to be set up to handle this type of situation easily.

Would anyone know a method for sharing my general struct between multiple processes?
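To make the constraint concrete: the pointers in this struct (and everything FFmpeg allocates behind them) are process-local, so they cannot meaningfully be placed in shared memory; only raw bytes can cross the process boundary. Below is a minimal sketch of that direction using POSIX shared memory; the slot layout, names, and sizes are my own assumptions, and real code would also need a semaphore or similar for synchronization:

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

// One fixed-size slot for a serialized packet's payload.
struct SharedSlot {
    int size;                        // bytes of data[] actually used
    unsigned char data[64 * 1024];
};

SharedSlot* openSharedSlot(const char* name, bool create) {
    int fd = shm_open(name, create ? (O_CREAT | O_RDWR) : O_RDWR, 0600);
    if (fd < 0) return nullptr;
    if (create && ftruncate(fd, sizeof(SharedSlot)) != 0) { close(fd); return nullptr; }
    void* p = mmap(nullptr, sizeof(SharedSlot),
                   PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                       // the mapping stays valid after close
    return p == MAP_FAILED ? nullptr : static_cast<SharedSlot*>(p);
}

// Writer: copy pkt->data/pkt->size into a slot; reader: rebuild an AVPacket
// from the bytes on its own side (e.g. with av_packet_from_data).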


The Context Menu for a Shell Extension is Not Appearing in the Folder View in Explorer


I have a Windows Shell Namespace extension mounted at a file location using the desktop.ini file with the CLSID for my namespace extension specified.

[.ShellClassInfo]
CLSID2={abcdef01-abcd-abcd-abcd-abcdef012345}

However, my context menu for the namespace extension only appears in the tree view in explorer and not in the folder view.

When I set a breakpoint in the namespace extension's CreateViewObject method, I can see that when I right-click on the folder in the tree view, I get calls with an riid of IID_IDropTarget and IID_IContextMenu. However, when I right-click on the folder in the folder view area, I only get calls with the riid of IID_IDropTarget.

Is there something I need to specify in the registry or in the desktop.ini to properly get the folder view to behave the same way as the tree view?

Note: My definitions of Tree View and Folder View come from the documentation on MSDN.


How to have stored properties in Swift, the same way I had in Objective-C?


I am switching an application from Objective-C to Swift, in which I have a couple of categories with stored properties, for example:

@interface UIView (MyCategory)

- (void)alignToView:(UIView *)view
          alignment:(UIViewRelativeAlignment)alignment;
- (UIView *)clone;

@property (strong) PFObject *xo;
@property (nonatomic) BOOL isAnimating;

@end

As Swift extensions don't accept stored properties like these, I don't know how to maintain the same structure as the Objective-C code. Stored properties are really important for my app, and I believe Apple must have created some solution for doing this in Swift.

As jou said, what I was looking for was actually associated objects, so I did the following (in another context):

import Foundation
import QuartzCore
import ObjectiveC

extension CALayer {
    var shapeLayer: CAShapeLayer? {
        get {
            return objc_getAssociatedObject(self, "shapeLayer") as? CAShapeLayer
        }
        set(newValue) {
            objc_setAssociatedObject(self, "shapeLayer", newValue, UInt(OBJC_ASSOCIATION_RETAIN))
        }
    }

    var initialPath: CGPathRef! {
        get {
            return objc_getAssociatedObject(self, "initialPath") as CGPathRef
        }
        set {
            objc_setAssociatedObject(self, "initialPath", newValue, UInt(OBJC_ASSOCIATION_RETAIN))
        }
    }
}

But I get an EXC_BAD_ACCESS when doing:

class UIBubble : UIView {
    required init(coder aDecoder: NSCoder) {
        ...
        self.layer.shapeLayer = CAShapeLayer()
        ...
    }
}

Any ideas?


ipad - layout getting distorted with autorotate NO


I am using iPad 9.3 simulator. In app settings, I have Portrait, Landscape Left, Landscape Right.

I have one simple View with three blocks in Portrait layout.

  • Block1
  • Block2
  • Block3

  • In my view controller, I have

    -(BOOL)shouldAutorotate
    {
      return NO;
    }
    - (UIInterfaceOrientationMask)supportedInterfaceOrientations
    {
       return UIInterfaceOrientationMaskPortrait;
    }
    

Now

  • I press the device Home button, which takes the app to the background.
  • I rotate the simulator to landscape right.
  • I re-run the application with Cmd-R.

  • Now the simulator opens up in landscape orientation.

  • The status bar is also landscape.
  • The width of the view is smaller now. There is blank space on the LHS.
  • But the view is rotated right while keeping the Portrait layout.
  • The view is distorted, meaning the blocks end up close to each other. This is also because the width is smaller, as mentioned in the LHS point above.

If I remove the autorotate code, the simulator opens in Portrait orientation and the view is not distorted.

Can you please explain why could this be happening?


UINavigationController strange behavior after UIApplicationDidBecomeActiveNotification


I am trying to restore my animation in a UINavigationController-based app. In viewWillAppear I do the following:

 override func viewWillAppear(animated: Bool) {
        super.viewWillAppear(animated)
        self.animateButtons()
    }

I have also added these:

 NSNotificationCenter.defaultCenter().addObserver(self, selector: #selector(addAnimation), name: UIApplicationDidBecomeActiveNotification, object: nil)

 NSNotificationCenter.defaultCenter().addObserver(self, selector: #selector(restorePosition), name: UIApplicationDidEnterBackgroundNotification, object: nil)

And this is my start/restore animation:

 func addAnimation() {
      self.animateButtons()
    }

  func restorePosition() {
        self.restoreToOriginalPosition()
    }

So to explain: when the controller is loaded, I create my buttons with self.makeRoundQButtons in viewDidLoad. Then I animate in viewWillAppear.

Then when entering background I restore their original position with self.restoreToOriginalPosition(), and I animate them again once active in func addAnimation() {...}.

Now this works fine on the "active" view. When I drill down in my navigation tree, enter background and become active again, and then use the back button to navigate to any previous view(s), NO animation happens even though viewWillAppear is called. If I move forward and then back again, everything works fine...

What am I doing wrong?


CMake: undefined reference to


I use CMake 3.5.2. When trying to build my C++ code I get the following error:

[100%] Linking CXX executable SomeExecutable
CMakeFiles/SomeExecutable.dir/Common/src/FunctionOne.cpp.o: In function `FunctionOne::FunctionOne(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, double, double, unsigned int, bool)':
FunctionOne.cpp:(.text+0x490): undefined reference to `Helper::initLevel(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool)'

How do I fix it?

CMakeLists.txt (shortened for brevity):

cmake_minimum_required(VERSION 3.0)
project(ProjectName)
add_definitions(-std=c++11)
add_definitions(-Wall)
add_definitions(-O2)
link_directories(/usr/local/lib)
add_executable(SomeExecutable SomeExecutable.cpp ${OptFunctionOne} ${OptFunctionTwo})
target_link_libraries(SomeExecutable -pthread -lboost_thread ${Boost_LIBRARIES} myLib armadillo)

File structure (shortened for brevity):

├── bin
├── CMakeCache.txt
├── cmake_install.cmake
├── CMakeLists.txt
├── Common
│   ├── include
│   │   ├── dataContainer.h
│   │   ├── OptFunctionOne.h
│   │   └── OptFunctionTwo.h
│   └── src
│       ├── OptFunctionOne.cpp
│       └── OptFunctionTwo.cpp
├── SomeExecutable.cpp
└── myLib
    ├── example.cpp
    ├── include
    │   ├── Config.h
    │   └── Helper.h
    └── libmyLib.so

What should I override in inheritance when it comes to light derivation?


class MyString :public string {
public:
    using string :: string;
    using string :: operator=;
    bool operator== (MyString&);
    bool operator<  (MyString&);
    bool operator>  (MyString&);
    MyString& operator=  (MyString&);
    MyString& operator=  (string&);
    MyString operator() (int, int);
    friend ostream & operator<<(ostream&, MyString&);
};

MyString MyString::operator() (int a, int b) {
    MyString os;
    os = string::substr(a, b);
    return os;
}

Note: I'm using cstring

It's my learning experiment, and I am confused when it comes to light derivation like the code above.

  1. Suppose I just want to add a feature to get a substring via the (int, int) operator, but then I realize I can't use those functions that take MyString parameters.

  2. I have tried to use using for the operators ==, <, >, but that doesn't work. The compiler tells me that string doesn't have those operators; I wonder whether it's because those functions in string are not virtual?

  3. Is the code in operator() legal, based on the functionality I set in public:? The compiler doesn't tell me anything, but I'm quite skeptical.


object created in function, is it saved on stack or on heap?


I am using C++. Specifically: when I create an object in a function, will this object be saved on the stack or on the heap?

The reason I am asking is that I need to save a pointer to an object, and the only place the object can be created is within functions, so if I have a pointer to that object and the method finishes, the pointer might be pointing to garbage afterwards --> if I add a pointer to the object to a list (which is a member of the class) and then the method finishes, I might have the element in the list pointing to garbage.

So again - when the object is created in a method, is it saved on the stack (where it will be irrelevant after the function ends) or is it saved on the heap (so I can point to it without causing any issues..)?

example:

class blee { };

class blah {
private:
    list<blee*> b;
public:
    void addBlee() {
        blee bl;          // local object: lives on the stack
        blee* bp = &bl;
        b.push_front(bp); // bp dangles as soon as addBlee() returns
    }
};

you can ignore syntax issues -- the above is just to understand the concept and dilemma...
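For contrast, a minimal sketch of the heap-allocated variant (with hypothetical cleanup added), where the object outlives the function and the stored pointer stays valid:

#include <list>
using namespace std;

class blee { };

class blah {
private:
    list<blee*> b;
public:
    void addBlee() {
        blee* bp = new blee();       // heap allocation: survives after addBlee() returns
        b.push_front(bp);
    }
    ~blah() {
        for (blee* p : b) delete p;  // the owner must eventually free them
    }
};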

Thanks all!


System.UStrClr Access Violation


Porting an old desktop app to the modern age using RAD Studio 10.1 Berlin. The app was last built in C++ Builder 6 (many, many moons ago).

Managed to sort out all the component and external library dependencies, but it appears that there are some lingering issues with the Unicode port. The app used to rely heavily on the built-in String type, which now corresponds to AnsiString.

The source code builds, but the binary throws an Access Violation somewhere before any application code is executed. The error stack trace:

rtl240.@System@@UstrClr$qqrpv + 0x12
largest_pos
__linkproc__ Attributebitmaps::Initialize 0x18
__init_exit_proc
__wstartup

The largest_pos function does some numerical manipulation - no String dependencies of any kind.

Attributebitmaps is a static class, with no member called Initialize. In Delphi you used to be able to declare an Initialize and Finalize call at the unit level, but that construct is not used in C++ Builder.

Any ideas around why an error would occur in System.UStrClr? Where would you go digging to get more insight into this?


Data structure + algorithm for ipv4 storage - efficient searching in prefixes


I am searching for a data structure for IPv4. What should it store? A prefix (base + mask) --> for example 85.23.0.0/16

base = 85.23.0.0 -> 32-bit unsigned

mask = 16, AKA 255.255.0.0 -> 8-bit unsigned char

So the min host is 85.23.0.0 and the max host is 85.23.255.255 (I know it should be .0.1 and .255.254 in the normal case, but I want to simplify it).

The main thing that I require is the speed of searching for an IP among the stored prefixes. For example, I give an unsigned int (32-bit) and I need to tell whether it is there or not.

I am writing in C++, so I can use the STL.

Right now it is stored in an STL set (pair of base + mask) and I am searching one by one, so it is sort of O(n). (Besides, as it is probably a BST tree, walking through it might be slower than O(n).)

To sum up: I don't need an efficient way to STORE IPv4; I need an efficient way to SEARCH it in some data structure. And the data structure won't store port or family type etc. It will store PREFIXES (base + mask).

So I am searching for a data structure plus some searching algorithm. A sketch of one candidate follows.
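To make the goal concrete, here is a minimal sketch of one common approach - one hash set of network addresses per prefix length, so a membership test is at most 33 cheap probes instead of O(n). The class and member names are my own:

#include <cstdint>
#include <unordered_set>

class PrefixTable {
public:
    void insert(uint32_t base, unsigned len) {      // e.g. insert(0x55170000, 16) for 85.23.0.0/16
        nets[len].insert(base & maskFor(len));
    }
    bool contains(uint32_t ip) const {              // true if any stored prefix covers ip
        for (unsigned len = 0; len <= 32; ++len)
            if (!nets[len].empty() && nets[len].count(ip & maskFor(len)))
                return true;
        return false;
    }
private:
    static uint32_t maskFor(unsigned len) {
        return len == 0 ? 0 : ~uint32_t(0) << (32 - len);
    }
    std::unordered_set<uint32_t> nets[33];          // index = prefix length 0..32
};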


Collection View Displaying Only One Item From An Array


I have a collection view in a view controller. There is one problem which I can't figure out: the custom cell in the collection view is displaying only one item from an array.

I can't figure out what is missing in the code. I have implemented both the delegate and the data source methods.

Here is the code I am using:

In viewDidLoad():
pathRef.observeSingleEventOfType(.ChildAdded, withBlock: { (snapshot) in
            let post = CollectionStruct(key: snapshot.key, snapshot: snapshot.value as! Dictionary<String, AnyObject>)
            self.userCollection.append(post)

            let indexPath =  NSIndexPath(forItem: self.userCollection.count-1, inSection: 0)
            self.collectionView!.insertItemsAtIndexPaths([indexPath])

        })



 func collectionView(collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int {
    return userCollection.count
}

func collectionView(collectionView: UICollectionView, cellForItemAtIndexPath indexPath: NSIndexPath) -> UICollectionViewCell {
    let cell = collectionView.dequeueReusableCellWithReuseIdentifier("CollectionCell", forIndexPath: indexPath) as! CollectionViewCell

    let post = userCollection[indexPath.row]


    if let imageUrl = post.category{

        if imageUrl.hasPrefix("gs://"){

            FIRStorage.storage().referenceForURL(imageUrl).dataWithMaxSize(INT64_MAX, completion: { (data, error) in
                if let error = error {

                    print("Error Loading")
                }
                cell.userImg.image = UIImage.init(data: data!)
            })


        }else if let url = NSURL(string: imageUrl), data = NSData(contentsOfURL: url){

            cell.userImg.image = UIImage.init(data: data)
        }
    }
    return cell
}

I am trying to retrieve images stored in the Firebase database.


make a simple NSInteger counter thread safe


I define an NSInteger counter and update its value in a callback, as the following code shows (the callback is in another thread):

-(void) myFunc {
  NSLog(@"initialise counter...");
  // I try to use volatile to make it thread safe
  __block volatile NSInteger counter = 0;

  [self addObserver:myObserver withCallback:^{
     // this is in another thread
     counter += 1;
     NSLog(@"counter = %d", counter);
  }];
}

I use the volatile keyword to try to make the counter thread safe; it is accessed in a callback block which runs on another thread.

When I invoke myFunc two times:

// 1st time call
[self myFunc];
// 2nd time call
[self myFunc];

the output is like this:

initialise counter...
counter = 1;
counter = 2;
counter = 3;
counter = 4;
counter = 1; // weird
initialise counter...
counter = 2; // weird
counter = 3;
counter = 1; // weird
counter = 4;

It looks like the 2nd call produces a counter with a wrong initial value, and the output before counter = 4 is counter = 1, which is also weird.

Is it because my code is not thread safe even with the volatile keyword? If so, how do I make my counter thread safe? If it is thread safe, why do I get this weird output?


Program triggers a breakpoint when run in debugger but works if run without debugger


I created a DLL and it gets attached to a server application. Now the problem is: if I run the server from the command prompt, the DLL runs fine. But if I debug the server in Visual Studio, the server crashes because of the DLL. I debugged it thoroughly and found that it crashes while allocating memory. I checked every possible thing - memory overwrite, memory leak - but everything seems to be fine. Has anyone encountered this type of problem before? Why is this happening? I searched on the internet too, but all I am getting is "crashing in release mode and not in debug mode".

EDIT: I am getting the following message in a window:

Windows has triggered a breakpoint in tcas.exe.
This may be due to a corruption of the heap, which indicates a bug in tcas.exe or any of the DLLs it has loaded.
This may also be due to the user pressing F12 while tcas.exe has focus.
The output window may have more diagnostic information.

If I click Continue, there won't be any problem.

EDIT: Sorry, I forgot to mention that it is the debug build I am using, not the release build.
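If it helps anyone in the same boat, here is a minimal sketch of what I plan to try next, assuming the MSVC debug CRT is in play: enabling heap checking on every allocation, so the debugger breaks at the allocation that corrupts the heap rather than at some later one:

#include <crtdbg.h>

int main()
{
    // Debug builds only: validate the CRT debug heap on every alloc/free.
    // Much slower, but the break then happens at the corrupting call site.
    _CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_CHECK_ALWAYS_DF);

    // ... start the server / load the DLL as usual ...
    return 0;
}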

Visual Studio preprocessor only works if /P is set


I'm having an awkward problem in Visual Studio 2008. I'm trying to define a string-to-enum mapping using a config header (call it param_defines.h) file which looks something like this:

DEFINE_ITEM( A, BOOLEAN )
DEFINE_ITEM( B, INT )
DEFINE_ITEM( C, INT )

And so on. This is then referenced in a second header (enums.h) file:

enum ParamType
{
    BOOLEAN = 0,
    INT
};

enum Param
{
    UNKNOWN = -1
#define DEFINE_ITEM( NAME, TYPE ) ,NAME
#include "param_defines.h"
#undef DEFINE_ITEM
};

Then in a third (source) file I'm doing this:

#include "enums.h"
std::tr1::unordered_map<std::string, int> params;
#define DEFINE_ITEM( NAME, TYPE ) params[ #NAME ] = NAME
#include "param_defines.h"
#undef DEFINE_ITEM

When I compile the source file I get a load of errors like:

error C2065: 'A': undeclared identifier
error C2065: 'B': undeclared identifier
error C2065: 'C': undeclared identifier

So something is going on; the preprocessor isn't quite doing what I want it to do.

The kicker is this: I set /P so I would have some way of diagnosing what's going wrong. When I do this, the file compiles successfully.


Wednesday, June 29, 2016

Executable binary used to run, now does nothing


I have a C++ program that I've compiled, tested, and know works. The compiled program is called by a startup script and has been consistently working and showing output as expected. Somehow the executable has stopped working on startup. Even worse, when I try running the binary outside of the startup script it now finishes instantly with no output (it's a long program with lots of console messages, even on failures).

This has now happened three times in the last week. Recompiling the program fixes this issue, but I don't want to have to do this multiple times per week. Any ideas on what's happening or how to fix it? (This is on an Intel Edison with Debian)

EDIT: I'm not sure if this matters, but this program will always be running when power is cycled, but not at a predictable state. Unfortunately, it won't be in an environment where I can do proper shutdowns. Power will be cut abruptly. Because the executable is not being edited while power is cycled I'm not sure how it would break that file.

EDIT: Here's how I start things in my script: (cd /root; bin/prog &; disown)

prog runs until power is interrupted


BFS taking twice the distance in undirected graph


Could you help me out with this code? (I got it from GitHub because it's clear and follows the same logic as mine, and it has the same problem; I just added some details, like the predecessors, that I'm using.) I tried a lot of times but I couldn't figure out the problem. I don't understand why my BFS algorithm (and this one) gives the wrong distance in an undirected, weighted graph. My graph uses a matrix. When I use BFS to travel from b to c, for example, I get 5 and not just 2 as the cost... and from a to b I get 2, which is the right answer.

    a b c d
a | 0 2 0 0
b | 2 0 0 3
c | 0 0 0 0
d | 0 3 0 0

Here is the code:

const int white = 0;  // branco
const int grey  = 1;  // cinza
const int black = 2;  // preto

int bfs(int start, int target) {
    int color[countVertices];
    queue<int> fila;
    dist = 0;
    predecessors.clear();
    for (int i = 0; i < countVertices; i++) {
        color[i] = white;
    }
    color[start] = grey;
    fila.push(start);
    while (!fila.empty()) {
        int u = fila.front();
        fila.pop();
        if (u == target) return 1;
        for (int v = 0; v < countVertices; v++) {
            if (matriz[u][v] != 0 && color[v] == white) {
                color[v] = grey;
                fila.push(v);
                dist += matriz[u][v];       // sum weight
                predecessors.push_front(u); // predecessor
            }
        }
        color[u] = black;
    }
    return 0;
}
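For comparison, a sketch of the bookkeeping I believe is missing (names match the code above; dist becomes per-node instead of a single global accumulator, so only edges on the path to each vertex are counted). Note that plain BFS on a weighted graph still only finds some path, not necessarily the cheapest one - Dijkstra is the usual tool for that:

#include <queue>
#include <vector>
using namespace std;

// matriz is the adjacency/weight matrix, n = countVertices.
int bfsDist(int start, int target, int n, const vector<vector<int>>& matriz) {
    vector<int> dist(n, -1);            // -1 = not visited yet
    queue<int> fila;
    dist[start] = 0;
    fila.push(start);
    while (!fila.empty()) {
        int u = fila.front(); fila.pop();
        if (u == target) return dist[u];          // cost along the discovered path
        for (int v = 0; v < n; ++v) {
            if (matriz[u][v] != 0 && dist[v] == -1) {
                dist[v] = dist[u] + matriz[u][v]; // only edges on v's own path
                fila.push(v);
            }
        }
    }
    return -1;                                     // unreachable
}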

Animate UIView along BezierPath with delay


I want to animate a UIView in a figure-8 motion. I am doing this in Swift using a Bezier path, but I want to add a delay to the animation.

let center: CGPoint = CGPoint(x: 60, y: 45)
let cent: CGPoint = CGPoint(x: 90, y: 45)
let radius: CGFloat = 15.0
let start: CGFloat = CGFloat(M_PI)
let end: CGFloat = 0.0

let path1: UIBezierPath = UIBezierPath(arcCenter: center, radius: radius, startAngle: start, endAngle: end, clockwise: true)
let path2: UIBezierPath = UIBezierPath(arcCenter: cent, radius: radius, startAngle: -start, endAngle: end, clockwise: false)
let path3: UIBezierPath = UIBezierPath(arcCenter: cent, radius: radius, startAngle: end, endAngle: -start, clockwise: false)
let path4: UIBezierPath = UIBezierPath(arcCenter: center, radius: radius, startAngle: end, endAngle: start, clockwise: true)

let paths: UIBezierPath = UIBezierPath()
paths.appendPath(path1)
paths.appendPath(path2)
paths.appendPath(path3)
paths.appendPath(path4)

let anim: CAKeyframeAnimation = CAKeyframeAnimation(keyPath: "position")
anim.path = paths.CGPath
anim.repeatCount = 2.0
anim.duration = 3.0
view.layer.addAnimation(anim, forKey: "animate position along path")

The figure 8 works just fine, but I have no idea how to add a delay. I have tried using methods like animateWithDuration(delay:options:animation:completion:), but that didn't help. If I am way off base and there is an easier way to animate a UIView in a 'Figure 8'/'Infinity Loop' motion, that would be fine. I just need the motion coupled with the delay.

Segmentation fault manipulating map c++


I don't understand why I get a segmentation fault on this code inside the "if" conditional, while when it is outside of it, it works perfectly. The map cidades starts empty. Here is the code:

int i, cont = 0; // counters
string line;
map<string,int> cidades;

... open a txt file here ...

while (getline(element, line, ',')) {
    if (i == 1) {
        std::map<std::string, int>::iterator it = cidades.find(line);
        // get the error when I try to print here
        // without the line below, there is no error
        // it looks like the if is always true
        cout << "ele: " << it->first << " key: " << it->second;
        if (it == cidades.end()) {
            // insert into the map
            cidades.insert(pair<string,int>(line, cont));
            c1 = cont;
        } else {
            // get the key
            c1 = cidades.at(line);
        }
    }
    cont++;
    i++;
}

But when I put the same code outside of the conditionals above, it works:

string a = "teste"; // taken from the txt and added into the map
std::map<std::string, int>::iterator it = cidades.find(a);
cout << "ele: " << it->first << " chave: " << it->second;

It prints teste and the key... but when I try to print inside the if, I get a segmentation fault (core dumped).

Detail View UITableView - Pass data before segue


So in my storyboard file I have a segue from one UITableViewController to another, so that when the user taps a cell it opens the next table. Nothing special. But when they tap the cell I need to specify the data being loaded into the next view. I do this in didSelectRowAtIndexPath: depending on indexPath.row, I set the value of a variable that is assigned to the detail table view in the prepareForSegue function. I have the view controller loaded as a variable in let detailViewController = segue.destinationViewController as DetailTableViewController. From there I set the data with an assignment function. However, when I tap the table the data does not show up until I press back and tap it again. Basically the data is being assigned after the segue... How can I assign the data and perform the segue afterwards?

EDIT

    override func tableView(tableView: UITableView, didSelectRowAtIndexPath indexPath: NSIndexPath) {
        switch indexPath.row {
        case 0:
            self.passedInfo = self.infoOne;
            break
        case 1:
            self.passedInfo = self.infoTwo;
            break
        case 2:
            self.passedInfo = self.infoThree;
            break
        default:
            break;
        }
    }

    override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject!) {
        let detailViewController = segue.destinationViewController as DetailTableViewController
        let destinationTitle = "Detail View"
        detailViewController.title = destinationTitle
        self.passedScores.sort{$0 > $1};
        detailViewController.setData(self.passedInfo);
    }

PJSIP linking error on Ubuntu 12.04 [on hold]


$ make
...
/Develop/3rdParty/pjproject-2.5.1/pjmedia/lib/libpjmedia-audiodev-i686-pc-linux-gnu.a(alsa_dev.o): In function `alsa_factory_refresh':'
alsa_dev.c:(.text+0x28e): undefined reference to `snd_device_name_hint'
alsa_dev.c:(.text+0x2b0): undefined reference to `snd_lib_error_set_handler'
alsa_dev.c:(.text+0x2de): undefined reference to `snd_device_name_get_hint'
alsa_dev.c:(.text+0x322): undefined reference to `snd_lib_error_set_handler'
alsa_dev.c:(.text+0x32e): undefined reference to `snd_device_name_free_hint'
alsa_dev.c:(.text+0x384): undefined reference to `snd_pcm_open'
alsa_dev.c:(.text+0x396): undefined reference to `snd_pcm_close'
alsa_dev.c:(.text+0x3b6): undefined reference to `snd_pcm_open'
alsa_dev.c:(.text+0x3d0): undefined reference to `snd_pcm_close'
collect2: error: ld returned 1 exit status
make[2]: *** [../bin/pjmedia-test-i686-pc-linux-gnu] Error 1
make[2]: Leaving directory `/Develop/3rdParty/pjproject-2.5.1/pjmedia/build'
make[1]: *** [pjmedia-test-i686-pc-linux-gnu] Error 2
make[1]: Leaving directory `/Develop/3rdParty/pjproject-2.5.1/pjmedia/build'
make: *** [all] Error 1
$

First it was complaining about a missing header, asoundlib.h, but that error disappeared after installing the libasound2-dev package. Now I see a linking error caused by alsa_dev.c.

Can you please help?

Thank you.

Regards, Serge


Converting integers to floating point numbers: performance considerations


I have a complex set of template functions which do calculations in a loop, combining floating point numbers and the uint32_t loop indices. I was surprised to observe that for this kind of function, my test code runs faster with double precision floating point numbers than with single precision ones.

As a test, I changed the format of my indices to uint16_t. After this, both the double and float version of the program were faster (as expected), but now the float version was significantly faster than the double version. I also tested the program with uint64_t indices. In this case the double and the float version are equally slow.

I imagine that this is because a uint32_t fits into the mantissa of a double but not into a float. Once the index type was reduced to uint16_t, the indices also fit into the mantissa of a float, and the conversion should be trivial. In the case of uint64_t, the conversion to double also needs rounding, which would explain why both versions perform equally.

Can anybody confirm this explanation?

EDIT: Using int or long as the index type, the program runs as fast as with uint16_t. I guess this speaks against what I suspected first.
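The mantissa part of the explanation can be checked directly: 2^24 + 1 is exactly representable in a double's 53-bit significand but not in a float's 24-bit one, so the float conversion has to round (a small self-contained test of my own):

#include <cstdint>
#include <iostream>

int main() {
    uint32_t n = (1u << 24) + 1;                   // 16777217
    std::cout.precision(17);
    std::cout << "float : " << static_cast<float>(n)  << '\n';  // 16777216 (rounded)
    std::cout << "double: " << static_cast<double>(n) << '\n';  // 16777217 (exact)
    return 0;
}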


Are there any standards for interpreting PHAdjustmentData?


I'm trying to build a photo editing extension in iOS. I understand the pipeline of how the existing edits to a photo can be interpreted by the app, but from what I've read there isn't much documentation on how to interpret the PHAdjustmentData.

For instance, it comes with a formatIdentifier, a formatVersion, and an arbitrary data property. I understand that the data property can be interpreted as a serialized object, but are there any standards that can be used to identify common filters? Or what about third-party filters? Perhaps some of these are system-defined filters that must be queried for and that use the same settings to reproduce the history of the image.

For example, if I edit one photo before calling my extension, I'll get the canHandle(_ adjustMentData:) -> Bool message. Printing out that object shows the following.

(lldb) po adjustmentData
<PHAdjustmentData: 0x600000055390> identifier=com.apple.photo version=1.2 data=0x6000001a8b20 (204)

How does one go about interpreting this? Clearly the identifier names the iOS Photos app, but the NSData itself could be anything. I'm sure it could be a dictionary of CIFilter property settings or anything else.

Are there any standards developing to concretely identify this historical data?


Passing by reference and returning by (bool or reference)?


There are two ways to achieve the same behavior: pass an array to the function, wait until the read and write operations on the array are done, then go on from where the function was called.
Originally I returned an array created inside Func, which is undefined behavior. So what's the better way among these two options (if they are any good at all):

typedef int array[100];
array& Func1(int (&array)[100]) {// option 1
    // read write on array
    return array;
}

bool Func2(int (&array)[100]) {// option 2
    // read write on array
    return true;
}

int main() {
    int a[100];
    a = Func(a);// fail
    if (Func(a)) { 
        //continue 
    }

}

Update
With the typedef I manage to return arrays, but as they're not assignable I should use a vector, as pointed out in the comments.
So it boils down to: how do I make sure Func3 returns before Func4 starts?

//use refrence to prevent decay to pointer (sizeof etc.)
void Func3(int (&array)[100]) {// option 3
    // read write on array, takes long
}

void Func4(int (&array)[100]) {// option 4
    // read write on array modified by Func3
}

int main() {
    int a[100];
    Func3(a);
    Func4(a);

}

Also, the caller needs the function to finish before it can continue.


CMake Gcov c++ creating wrong .gcno files


I have a CMakeLists.txt file in which I added:

set(CMAKE_CXX_FLAGS "-fprofile-arcs -ftest-coverage -pthread -std=c++11 -O0 ${CMAKE_CXX_FLAGS}")

It is generating the report files in:

project_root/build/CMakeFiles/project.dir/

BUT the files it generates have the extensions .cpp.gcno, .cpp.gcda and .cpp.o.

Also, they are not in the same folder as the src files, which are at:

project_root/src/ 

When I move the report files to the src/ folder and execute:

$ gcov main.cpp
main.gcno:cannot open notes file

But I get that error message. So I rename the .cpp.gcno, .cpp.gcda and .cpp.o files to .gcno, .gcda and .o, and finally I get the following:

gcov main.cpp
Lines executed:86.67% of 15
Creating 'main.cpp.gcov'

I have over 50 files and can't do this manually for each one.

I need to be able to run gcov once for all files and generate report for all files. I don't care where the files are generated.


Initialize a const class member with a default value


I have two classes, A and B. bclass of type B is a constant member of class A; what I want to do is initialize bclass with default values if no B object is provided to A. Something like this:

#include <iostream>
#include <string>
#include <unistd.h>

using namespace std;

class B{
public:
  B(string Bs): Bstring(Bs){
    cout << "B constructor: " << Bstring << endl;
  }

  ~B(){
    cout << "B destructor: " << Bstring << endl;
  }

private:
  const string Bstring;
};

class A{
public:
  A(const B subb = B("mmmmm")): bclass(subb){
    cout << "A constructor." << endl;
  }

  ~A(){
    cout << "A destructor." << endl;
  }

private:
  const B bclass;
};

int main(void){
    A a;
    cout << "doing work..." << endl;
    sleep(2);
    return 0;
}

The output is:

B constructor: mmmmm
A constructor.
B destructor: mmmmm
doing work...
A destructor.
B destructor: mmmmm

The thing is that I'm constructing two B objects when only one is needed! And somehow, the B constructor is called only once, while the destructor is called twice... What is going on?!
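For comparison, a sketch of an alternative I am considering: a C++11 default member initializer, which constructs the default B in place with no parameter copy involved. A2 is my own name for the variant; the B class is the one defined above:

class A2 {
public:
    A2() { cout << "A2 constructor." << endl; }

    // Callers who do have a B can still supply one; the default
    // member initializer below is then simply ignored.
    explicit A2(const B& b): bclass(b) { }

private:
    const B bclass{"mmmmm"};   // constructed in place, used only when no B is provided
};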


Moving a UIView while changing its nested UILabel causes the view to jump back to initial position


I'm trying to set up a UISlider so that when the slider is moved, a bubble appears over the thumb rectangle to show what the current value is set to.

Moving the view on its own works just fine, but when altering the value of the label inside that view, the view will quickly 'jump' back to the initial location where I placed the UIView on the storyboard when the slider hits certain points on the track. It then jumps back as soon as the thumb rectangle moves past that one pixel on the track.

I've made a sample project that replicates the issue here: https://github.com/austinmckinley/SliderBubbleTest

Alternatively, here's what my ViewController looks like.

import UIKit

class ViewController: UIViewController {
    @IBOutlet weak var slider: UISlider!
    @IBOutlet weak var bubble: UIView!
    @IBOutlet weak var bubbleLabel: UILabel!

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
    }

    @IBAction func sliderMoved(sender: UISlider) {
        let sliderValue = lroundf(sender.value)

        let trackRect = sender.trackRectForBounds(sender.frame)
        let thumbRect = sender.thumbRectForBounds(sender.bounds, trackRect: trackRect, value: Float(sliderValue))
        bubble.center.x = thumbRect.midX

        slider.value = Float(sliderValue)

        // If this next line is commented, the jumping issue does not occur.
        bubbleLabel.text = String(sliderValue)
    }
}

Does operator ',' always returns the second argument?


In GCC

#include <iostream>
int main() {
  if(1 == 2, true) {
    std::cout << "right" << std::endl;
  } else std::cout << "left" << std::endl;
  return 0;
}

It outputs 'right'; is it always so?


Can the compiler just optimize out the left operand, since it isn't used?

warning: left operand of comma operator has no effect [-Wunused-value]
   if(1 == 2, true) {
      ~~^~~~

I have some code like this:

if(doSomethingHereWhichAlwaysReturnsTrue,
     doSomeOtherHereAndDependOnTheResultExecuteBodyOrNot) {
  ..body.. - execute if 'doSomeOther' returns true
}

Though this code is debug-only, I wonder whether I can use such a construction in release builds. I guess not.


To avoid asking twice: I also sometimes use assignment chaining, like:

int i, j, k, l;
i = j = k = l = 0;

Is it safe?

I heard once that the execution order is undefined and so this is undefined behaviour. And as UB it could simply be optimized out by the compiler, but using '-O3 -Wall -pedantic' I see no warnings and get the expected result, so I guess there are no problems here.
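To check both questions concretely, a small self-contained test of my own: the built-in comma operator evaluates the left operand, discards its value, and yields the right operand, with the left sequenced before the right; and chained assignment is right-associative. Neither construct is undefined behaviour by itself (things only get murky with overloaded commas, or when the same object is modified twice without sequencing):

#include <iostream>

int noteCall(bool& flag) {
    flag = true;                 // always executed: the left operand is evaluated first
    return 0;                    // ...and then its value is discarded
}

int main() {
    bool called = false;
    if (noteCall(called), true)  // the condition is the right operand: true
        std::cout << std::boolalpha << "called = " << called << '\n';  // called = true

    int i, j, k, l;
    i = j = k = l = 7;           // parsed as i = (j = (k = (l = 7)))
    std::cout << i << ' ' << j << ' ' << k << ' ' << l << '\n';        // 7 7 7 7
    return 0;
}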


ffmpeg c/c++ get frame count or timestamp and fps


I am using ffmpeg to decode a video file in C. I am struggling to get either the count of the current frame I am decoding or the timestamp of the frame. I have read numerous posts that show how to calculate an estimated frame number based on the fps and the frame timestamp; however, I am not able to get either of those.

What I need: the fps of the video file, and the timestamp of the current frame or the frame number (not calculated).

What I have: I am able to get the duration of the video using pFormatCtx->duration/AV_TIME_BASE. I am counting the frames as I process them to get a current frame count, but this is not going to work long term. I can get the total frame count for the file using pFormatCtx->streams[currentStream->videoStream]->nb_frames. I have read this may not work for all streams, although it has worked for every stream I have tried.

I have tried using the time_base.num and time_base.den values and packet.pts, but I can't make any sense of the values I am getting from those, so I may just need to understand better what those values are. Does anyone know of resources that show examples of how to get these values?
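For the time_base part, here is a sketch of the arithmetic as I currently understand it (an assumption on my part, not a confirmed answer; the function names are my own, and the result is still an estimate): pts values are expressed in units of the stream's time_base, so:

#include <cstdint>
extern "C" {
#include <libavformat/avformat.h>
}

// Presentation time of a packet in seconds, or -1 if unknown.
double packetSeconds(const AVFormatContext *fmtCtx, int vs, const AVPacket *pkt) {
    const AVStream *st = fmtCtx->streams[vs];
    if (pkt->pts == AV_NOPTS_VALUE) return -1.0;
    return pkt->pts * av_q2d(st->time_base);   // pts is in time_base units
}

// Estimated frame number from a time in seconds and the stream's average rate.
int64_t estimatedFrameNo(const AVFormatContext *fmtCtx, int vs, double seconds) {
    double fps = av_q2d(fmtCtx->streams[vs]->avg_frame_rate);
    return (int64_t)(seconds * fps + 0.5);
}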

Why is new used in C++ to get more memory when declaring a variable is doing the same thing? [duplicate]


This question already has an answer here: when to use new in C++? (6 answers)

I can't seem to understand the difference between using new to get more memory and just declaring a variable. Here is where I read about it, in a book called Jumping into C++:

"Getting more memory with new. Dynamic allocation means requesting as much (or as little) memory as you need, while your program is running. Your program will calculate the amount of memory it needs instead of working with a fixed set of variables with a particular size. This section will provide the foundation of how to allocate memory, and subsequent sections will explain how to fully take advantage of having dynamic allocation. First let's see how to get more memory. The keyword new is used to initialize pointers with memory from the free store. Remember that the free store is a chunk of unused memory that your program can request access to. Here's the basic syntax:

int *p_int = new int;

The new operator takes an "example" variable from which it computes the size of the memory requested. In this case, it takes an integer, and it returns enough memory to hold an integer value."

So what I don't understand is: how is

int *p_int = new int;

any different from

int *p_int;
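The difference can be shown in a few lines (a minimal illustration of my own, not from the book): new actually reserves an int on the free store, while a bare pointer declaration reserves nothing for the pointer to point at:

#include <iostream>

int main() {
    int *p_int = new int;   // p_int points at a freshly allocated int
    *p_int = 42;            // OK: writes into that allocation
    std::cout << *p_int << '\n';
    delete p_int;           // free-store memory must be released explicitly

    int *q_int;             // q_int is an uninitialized pointer: no int exists
    // *q_int = 42;         // undefined behavior: nothing was ever allocated
    return 0;
}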

Compiler doesn't recognize a copy constructor of a class within a class


I've written code for a tuple class (of integers) which also defines an iterator class for iterating over the tuple. The iterator has two members: a pointer to the tuple it originated from and an integer denoting the current position of the iterator. However, the compiler doesn't recognize the copy constructor for the iterator (whether I use the default one or write one myself). Here's the code: http://ideone.com/nVNDvC, or in case the link doesn't work:

class tuple {
    int* array;
    int size;
public:
    tuple(int* a, int s) : array(new int[s]), size(s) {
        for(int i=0 ; i<s; ++i)
            array[i]=a[i];
    }
    ~tuple() { delete[] array;}
    tuple (const tuple& t) : tuple(t.array, t.size) { };

    class iterator {
        tuple* origin;
        int index;
        iterator(tuple* o, int i) : origin(o), index(i) { }
        friend class tuple;
    public:
        iterator(const iterator&) = default;
    };
    iterator begin() { return iterator(this, 0); }
    iterator end() { return iterator(this, size); }
};

int main() {
    int myarray[2] = {42, 43};
    tuple mytuple(myarray, 2);
    tuple::iterator iterator1 = mytuple.begin();
    tuple::iterator iterator2 = mytuple.end();
    iterator2(iterator1);
    return 0;
}

I get the error:

prog.cpp: In function 'int main()':
prog.cpp:29:21: error: no match for call to '(tuple::iterator) (tuple::iterator&)'
  iterator2(iterator1);

Implicit conversion from class to std::ofstream not working as expected


I have a problem getting an implicit conversion from my class File to a std::ofstream.

Here is the interesting stuff from the File.h:

class File
{
    public:
        File(const std::string path);
        operator std::ofstream&();

    private:
        std::ofstream _ofs;
        std::string _path;
};

And here is the File.cpp:

File::File(const std::string path)
    : _path(path)
{}

File::operator std::ofstream&()
{
    if (!_ofs.is_open()) _ofs.open(_path);
    return _ofs;
}

When I try to use the file with the << operator like this:

File file("test.txt");
file << "test";

g++ gives me the compile-time error "Invalid operands to binary expression ('File' and 'const char *')". However, if instead of the const char * I use a class I've written my own << operator for (with operands std::ostream& and the class), it works as expected. E.g.:

class Color;
std::ostream& operator<<(std::ostream&, const Color&);

Color color;
File file("file.file");
file << color;

This works as expected. So the conversion to std::ofstream seems to work, but somehow the << operator for std::ostream and const char * does not.
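My current suspicion: the const char * inserter for streams is itself a function template (parameterized on the character traits), and template argument deduction does not consider user-defined conversions, whereas the Color overload is a plain function taking exactly std::ostream&. Under that assumption, a hypothetical workaround sketch that invokes the conversion explicitly:

#include <fstream>
#include <ostream>

// Non-template overload for File itself, so no user-defined conversion is
// needed during overload resolution; the cast calls File::operator std::ofstream&.
std::ostream& operator<<(File& f, const char* s)
{
    return static_cast<std::ofstream&>(f) << s;
}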


CorePlot - sync ranges of 2 graphs


I've created 2 separate plots and now I wanted to detect when user changes xRange of one of them (scrolling) and change the xRange of second plot.

I've been trying to use plotSpace:willChangePlotRangeTo:forCoordinate: and posting a notification with the new range and the plotID in userInfo. The parent view controller listens for the notification and changes the xRange of the second plot, BUT: I get lag, the second plot is "shaky", and very often it ends up with a different range. When I do it very fast, the shakiness is not observed (only lag).

How can I solve this?

In the parent view controller:

 -(void)receivedNewRangeNotif:(NSNotification*)notification {
    NSDictionary* userInfo = notification.userInfo;
    NSString* identifier = [userInfo objectForKey:@"identifierKey"];
    CPTPlotRange* newRange = [userInfo objectForKey:@"newRangeKey"];

    NSLog(@"receivedNewRangeNotif: %@",identifier);
    if ([identifier isEqualToString:@"firstItemID"]) {
        _secondItem.postNotifications = NO;
        CPTXYPlotSpace *secondPlotSpace = (CPTXYPlotSpace *)_secondItem.graph.defaultPlotSpace;
        secondPlotSpace.xRange = [CPTPlotRange plotRangeWithLocation:newRange.location length:newRange.length];
        _secondItem.postNotifications = YES;
    }
    else if ([identifier isEqualToString:@"secondItemID"]) {
        _firstItem.postNotifications = NO;
        CPTXYPlotSpace *plotSpace = (CPTXYPlotSpace *)_firstItem.graph.defaultPlotSpace;
        plotSpace.xRange = [CPTPlotRange plotRangeWithLocation:newRange.location length:newRange.length];
        _firstItem.postNotifications = YES;
    }
}

In plotItem:

- (CPTPlotRange *)plotSpace:(CPTPlotSpace *)space willChangePlotRangeTo:(CPTPlotRange *)newRange forCoordinate:(CPTCoordinate)coordinate {
    if (_postNotifications) {
        NSDictionary *userInfo = [[NSDictionary alloc] initWithObjectsAndKeys:newRange, @"newRangeKey",
                                  self.identifier, @"identifierKey", nil];

        NSLog(@"sendingNewRangeNotif");
        [[NSNotificationCenter defaultCenter] postNotificationName:@"TODOnotifRangeChanged" object:nil userInfo:userInfo];
    }

    return newRange;
}

Null Data field in cv::Mat object


I am quite new to OpenCV. I am running into quite a puzzle (from my naive perspective...).

I am trying to set a region of a zero matrix to ones. In essence do the following:

Mat a = Mat::zeros(10, 10, CV_8UC1);
Mat b = Mat::ones(3, 3, CV_8UC1);

Range h = Range(2, 5);
Range w = Range(2, 5);

b.copyTo(a(h, w));

I've checked the output of this exact code, and it works fine. The problem comes in when I try to do this in my actual code:

int key, top,left,bottom,right;
Mat blackImg = Mat::zeros(imgHeight, imgWidth, CV_8UC1);
Mat whiteBox = Mat::ones(patternHeight, patternWidth, CV_8UC1);
while (true) {
    blackImg = Mat::zeros(patternHeight, patternWidth, CV_8UC1);

    top = patternPosY;
    bottom = patternPosY + patternHeight;
    left = patternPosX;
    right = patternPosX + patternWidth;

    Range h = Range(top, bottom);
    Range w = Range(left, right);

    whiteBox.copyTo(blackImg(h, w));

    imshow("Pattern", blackImg);
    // key inputs
    key = waitKey(30);

    if (key == 27) {
        break;
    }
}

However, the data field of the blackImg Mat object is NULL and remains so, which in turn leads to a memory error. I have checked the values of top, bottom, right, and left and they are within bounds.

I am sure I'm missing something basic, and it would be infinitely helpful if someone could point it out.
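A hedged guess from the code alone: inside the loop, blackImg is reallocated with the pattern's dimensions instead of the image's, so as soon as patternPosY or patternPosX is nonzero, Range(top, bottom) and Range(left, right) reach outside the matrix. If that is the bug, the fix is one line:

// Assuming the intent was to clear the full-size image each frame:
// allocate with the image dimensions so the ROI stays inside the matrix.
blackImg = Mat::zeros(imgHeight, imgWidth, CV_8UC1);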


How to have a deadlock scenario with boost MPI (I use MPICH compiler)?


I am trying to find out in which cases a potentially blocking boost MPI "send" will actually block and cause a deadlock.

#include <boost/mpi.hpp>
#include <iostream>

int main(int argc, char *argv[])
{
  boost::mpi::environment env{argc, argv};
  boost::mpi::communicator world;
  if (world.rank() == 0)
  {
    char buffer[14];
    const char *c = "Hello, world from 1!";
    world.send(1, 1, c, 13);
    std::cout << "1--11111\n";
    world.send(1, 1, c, 13);
    std::cout << "1--22222\n";
    world.recv(1, 1, buffer, 13);
    std::cout << "1--33333\n";
    world.recv(1, 1, buffer, 13);
    std::cout << "1--44444\n";
    buffer[13] = '\0';
    std::cout << buffer << "11 \n";
  }
  else
  {
    char buffer[14];
    const char *c = "Hello, world from 2!";
    world.send(0, 1, c, 13);
    std::cout << "2--11111\n";
    world.send(0, 1, c, 13);
    std::cout << "2--22222\n";
    world.recv(0, 1, buffer, 13);
    std::cout << "2--33333\n";
    world.recv(0, 1, buffer, 13);
    std::cout << "2--44444\n";
    buffer[13] = '\0';
    std::cout << buffer << "22 \n";
  }
}

but it runs just fine, with this output order:

2--11111
2--22222
1--11111
1--22222
1--33333
1--44444
Hello, world 11 
2--33333
2--44444
Hello, world 22

I would be grateful if someone could give me a scenario in which I actually get a deadlock. How does the potentially blocking boost MPI send work?

Thank you.
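A hedged illustration of why the example above happens to work: small messages are typically delivered "eagerly" (buffered inside the MPI implementation), so send() returns before a matching recv() is posted. Once a message exceeds the implementation's eager threshold, send() blocks until the receiver posts a recv(), and two ranks that both send first will deadlock. The 4 MiB size below is an assumption; the actual threshold depends on the MPI implementation and transport:

#include <boost/mpi.hpp>
#include <vector>

// Run with exactly two ranks: both enter send() first, neither ever
// reaches recv(), and the program hangs once the payload is too big
// to be buffered eagerly.
int main(int argc, char *argv[])
{
  boost::mpi::environment env{argc, argv};
  boost::mpi::communicator world;

  std::vector<char> out(1 << 22, 'x');   // 4 MiB payload
  std::vector<char> in(out.size());

  int peer = 1 - world.rank();
  world.send(peer, 0, out.data(), static_cast<int>(out.size())); // both block here
  world.recv(peer, 0, in.data(), static_cast<int>(in.size()));   // never reached
}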


Sign-in to iTunes Store popup keeps appearing when trying to use the app


Now I’m facing this issue in PhoneGap after implementing IAP: the sign-in prompt keeps showing, from app launch and then everywhere in the app. I tried the two plugins below and hit the same problem with both, and none of the fixes I have tried work.

plugin 1: https://github.com/j3k0/PhoneGap-InAppPurchase-iOS.git (issues: https://github.com/j3k0/cordova-plugin-purchase/issues/427, /357, /79)

plugin 2: https://github.com/AlexDisler/cordova-plugin-inapppurchase (issue: https://github.com/AlexDisler/cordova-plugin-inapppurchase/issues/43)

Log details when the issue occurs:

securityd[88]: CFPropertyListReadFromFile file file:///Library/Keychains/accountStatus.plist: The operation couldn’t be completed. (Cocoa error 260.)
securityd[88]: CFPropertyListReadFromFile file file:///Library/Keychains/accountStatus.plist: The operation couldn’t be completed. (Cocoa error 260.)
storebookkeeperd[95]: [UPP-SBDPlaybackPositionStorageController] clearing all local changes that had been scheduled for push
storebookkeeperd[95]: [UPP-SBDPlaybackPositionStorageController] reseting sync anchor to 0, and scheduling pull from server
storebookkeeperd[95]: [UPP-SBDPlaybackPositionStorageController] target sync date from client: 2016-06-23 14:45:46 +0000 (in 4.99 sec)
storebookkeeperd[95]: [UPP-SBDPlaybackPositionStorageController] setting target date to: 2016-06-23 14:45:46 +0000 (in 62625518058.29 sec)
storebookkeeperd[95]: [UPP-SBDPlaybackPositionStorageController] scheduling sync (via BackgroundTaskJob) 4.991308 seconds from now...
storebookkeeperd[95]: [UPP-SBDPlaybackPositionStorageController] clearing all local changes that had been scheduled for push
storebookkeeperd[95]: [UPP-SBDPlaybackPositionStorageController] reseting sync anchor to 0, and scheduling pull from server
storebookkeeperd[95]: [UPP-SBDPlaybackPositionStorageController] target sync date from database: 2016-06-23 14:45:46 +0000 (in 4.91 sec)
storebookkeeperd[95]: [UPP-SBDPlaybackPositionStorageController] scheduling sync (via BackgroundTaskJob) 4.920134 seconds from now...

This is a screenshot of the issue:

Compiling simple C++ program with dependencies [duplicate]


I'm new to C++ (coming from the Java world), and am writing a simple toy program to learn the syntax. I've written a simple database class (in db_database.h and db_database.cpp) and a main.cpp file that interacts with the database class.

Upon compiling using the command g++ main.cpp -o main, I get the following error:

Undefined symbols for architecture x86_64:
  "db::Database::print_contents()", referenced from:
      _main in main-5f85bd.o
  "db::Database::add_entry(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)", referenced from:
      _main in main-5f85bd.o
  "db::Database::Database()", referenced from:
  _main in main-5f85bd.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see     invocation)

I'm assuming that I'm simply not addressing the fact that main.cpp includes db_database.h.

How do I get this to compile?
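A hedged note, assuming db_database.cpp sits next to main.cpp: the undefined symbols just mean db_database.cpp was never compiled and linked, since g++ main.cpp -o main builds only one translation unit. Compiling both together, e.g. g++ main.cpp db_database.cpp -o main, should resolve the linker errors.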


Arduino UNO error


I got a problem with this code from the Arduino Projects Book. It's a very simple piece of code, sorry if this is very obvious.

This is the code I wrote:

const int greenLEDpin = 9;
const int redLEDpin = 10;
const int blueLEDpin = 11;

const int redSensorpin = A0;
const int greenSensorpin = A1;
const int blueSensorpin = A2;

int redValue = 0;
int greenValue = 0;
int blueValue = 0;

void setup() {
  Serial.begin(9600);

  pinMode(greenLEDpin,OUTPUT);
  pinMode(redLEDpin,OUTPUT);
  pinMode(blueLEDpin,OUTPUT);

}

void loop() {

  redSensorValue = analogRead(redSensorpin);
  delay (5);
  greenSensorValue = analogRead(greenSensorpin);
  delay(5);
  blueSensorValue = analogRead(blueSensorpin);

  Serial.print("Raw Sensor Values t Red: ");
  Serial.print(redSensorValue);
  Serial.print("t Green: ");
  Serial.print(greenSensorValue);
  Serial.print("t Blue: ");
  Serial.println(blueSensorValue);

  redValue = redSensorValue/4;
  greenValue = greenSensorValue/4;
  blueValue = blueSensorValue/4;

  Serial.print("Mapped Sensor Values t ReD: ");
  Serial.print(redValue);
  Serial.print("t Green: ");
  Serial.print(greenValue);
  Serial.print("t Blue: ");
  Serial.print(blueValue);
  analogWrite(redLEDpin, redValue);
  analogWrite(greenLEDpin, greenValue);
  analogWrite(blueLEDpin, blueValue);
}

And here is the error: Arduino:1.7.10 (Windows 8.1), Placa:"Arduino Uno"

LED_tricolor.ino: In function 'void loop()':

LED_tricolor.ino:24:2: error: 'redSensorValue' was not declared in this scope

LED_tricolor.ino:26:2: error: 'greenSensorValue' was not declared in this scope

LED_tricolor.ino:28:2: error: 'blueSensorValue' was not declared in this scope

Does someone know what's happening here? I tried some things like putting the variables earlier, but nothing worked... I hope you can help me. ^^
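For what it's worth, the compiler is only saying that those three variables were never declared anywhere. A minimal fix is to declare them alongside the other globals:

// Missing declarations: add these next to redValue/greenValue/blueValue
// (or declare them locally inside loop()).
int redSensorValue = 0;
int greenSensorValue = 0;
int blueSensorValue = 0;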


IFTTT-like Service API


I'm developing an app that amalgamates all of your notifications from various sources and presents them in an organized fashion. It's something like HomeKit, but not limited to only smart home-based notifications. Essentially it's a convenience app, and means that instead of having ten different apps that provide notifications, you have one app that centralizes the whole thing, ranging from HomeKit-like functionality, to telling you if you have a text message or email, to telling you if your favorite news source or twitter account posts something, to telling you if you've got something planned on your calendar, to even telling you if your home computer completed its backup successfully. So far, my first step is integrating IFTTT into it, e.g. something like notifying you if your house turns on its lights at sunset. However, while I found something called "Maker" on their website, it seems you have to create custom recipes in order to utilize it, and I would like it to be compatible with existing recipes. Is there any such SDK or API that will allow me to receive that information? Thank you in advance!

Side note: While yes, I realize that this is somewhat of a vague question, I could not think of an alternative forum to post this on, and if you have any suggestions for a more appropriate post location, I will gladly consider them.


Compiler doesn't like my class C++. getting undeclared identifier error and more


I created a class called "Message". I want to store Messages that are created with the class "Message" in a static vector array in a class named "MessageBox". The compiler tells me that Message doesn't exist, but the editor is telling me otherwise. Here are the files with the code:

"Message.h"

#pragma once
#include <iostream>
#include <string>
#include "Message Box.h"

namespace ATE {
    class Message
    {
    public:
        Message(std::string act, std::string ID, std::string IDtwo) { action = act, ID1 = ID, ID2 = IDtwo; }
        Message(std::string act, std::string ID) { action = act, ID1 = ID; }

        std::string action;
        std::string ID1;
        std::string ID2 = nullptr;

    };

}

"Message Box.h"

#pragma once
#include <string>
#include <vector>
#include "Message.h"

namespace ATE {
    class MessageBox
    {
    public:
        static std::vector<Message> MsgBox;
        void addMessage(Message msg);

    };
}

"Message Box.cpp"

#include "Message Box.h"

void ATE::MessageBox::addMessage(Message msg)
{
     MsgBox.push_back(msg);
}

My errors:

Error C2065 'Message': undeclared identifier (file: message box.h, line: 11)

Error C2923 'std::vector': 'Message' is not a valid template type argument for parameter '_Ty' (file: message box.h, line: 11)

Error C2061 syntax error: identifier 'Message' (file: message box.h, line: 12)

Help is much appreciated (:
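A hedged diagnosis: the two headers include each other, and with #pragma once the cycle becomes an ordering problem. When Message Box.cpp is compiled, Message Box.h is entered first; its #include "Message.h" then re-includes Message Box.h, which is skipped because it is already being processed, so std::vector<Message> is reached before class Message has been seen. Since Message.h never actually uses MessageBox, dropping that include breaks the cycle. A sketch (note, separately, that std::string ID2 = nullptr; is undefined behaviour, and that the static member still needs an out-of-class definition):

// Message.h -- remove the circular include; nothing here needs MessageBox.
#pragma once
#include <string>

namespace ATE {
    class Message
    {
    public:
        Message(std::string act, std::string ID, std::string IDtwo) : action(act), ID1(ID), ID2(IDtwo) {}
        Message(std::string act, std::string ID) : action(act), ID1(ID) {}

        std::string action;
        std::string ID1;
        std::string ID2;   // was `= nullptr`, which crashes at construction
    };
}

// Message Box.cpp -- the static vector also needs exactly one definition:
std::vector<ATE::Message> ATE::MessageBox::MsgBox;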


mardi 28 juin 2016

CUDA separate compilation + building and using a static library = binary linking trouble


I'm using CUDA 7.5 with GCC 4.9.3, and building a binary using CMake. First there's a library I add with cuda_add_library; then there's the cuda_add_executable - which uses some direct g++ compilation and some nvcc compilation. The compilation itself passes fine, but during the final linking of the binary, which uses g++ rather than nvcc, I get the error message:

libktkernels.a(ktkernels_intermediate_link.o): In function `__cudaRegisterLinkedBinary_66_tmpxft_00007a5f_00000000_16_cuda_device_runtime_compute_52_cpp1_ii_8b1a5d37':
/tmp/tmpxft_00003cf9_00000000-2_ktkernels_intermediate_link.reg.c:25: multiple definition of `__cudaRegisterLinkedBinary_66_tmpxft_00007a5f_00000000_16_cuda_device_runtime_compute_52_cpp1_ii_8b1a5d37'
CMakeFiles/tester.dir/tester_intermediate_link.o:/tmp/tmpxft_00003d0f_00000000-2_tester_intermediate_link.reg.c:24: first defined here
collect2: error: ld returned 1 exit status

In case you're not a CMake person, I'll mention that the local library, libktkernels.a, is generated from the object code like so:

/usr/bin/ar cq libktkernels.a lots.cu.o of.cu.o files.cu.o here.cu.o

Now, it's true that I have some code which is compiled both into the library and into the binary directly. But is that what triggers this error message, or can it be something else? What is that symbol that's defined multiple times? Can I demangle it somehow to work out what it's associated with? It's not quite clear to me whether the problem

Bison does not appear to recognize C string literals appropriately


My problem is that I am trying to run a program that I wrote using a flex-bison scanner-parser. What my program is supposed to do is take user input (in my case, queries for a database system I'm designing), lex and parse it, and then execute the corresponding actions. What actually happens is that my parser code is not correctly interpreting the string literals that I feed it.

Here's my code:

130 insertexpr :  "INSERT" expr '(' expr ')'
131 
132                  {
133                         $$ = new QLInsert( $2, $4 );
134                          }
135                         ;

And my input, following the "Query: " prompt:

Query: INSERT abc(5);
input:1.0-5: syntax error, unexpected string, expecting end of file or end of line or INSERT or ';'

Now, if I remove the "INSERT" string literal from my parser.yy code on line 130, the program runs just fine. In fact, after storing the input data (namely, "abc" and the integer 5), it's returned right back to me correctly.

At first, I thought this was an issue with character encodings. Bison code needs to be compiled and run using the same encodings, which should not be an issue seeing as I am compiling and running from the same terminal.
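For what it's worth, a hedged pointer: in Bison, a double-quoted literal such as "INSERT" is only an alias. Unless the grammar declares it against a named token (e.g. %token INSERT "INSERT") and the flex scanner actually returns that token when it sees the keyword, the parser can never match the literal; a scanner that hands the keyword back as a generic string token would produce exactly this "unexpected string" error.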

My system details:

Ubuntu 8.10 (Linux 2.6.24-16-generic)
flex 2.5.34
bison 2.3
gcc 4.2.4

If you need any more info or code from me, let me know!


Using FCM token obtained via batchImport (iOS)


I am trying to migrate an existing app to use FCM. I took the APNS token and sent it to the "batchImport" service, using curl:

curl -H "Authorization: key=<auth key>" -H "Content-Type: application/json" -X POST -d "{"application": "com.myco.myapp", "sandbox": false, "apns_tokens": ["410564ffd0aaf91dd06e8ab7b8362238e2c7f1bbd5a520d6afaff38c9b670a90"] }" https://iid.googleapis.com/iid/v1:batchImport

I receive a "registration_token" in response. When I then try to use that token to request a push notification, it does not arrive on the device. Here's the curl from that:

curl -H "Authorization: key=<Auth key>" -H "Content-Type: application/json" -d "{"to":"<registration_token_from_above>", "notification":{"body":"First", "title":"Num 1"}}" -X POST https://fcm.googleapis.com/fcm/send

I am also unable to send from the "Notification" tool in the Firebase console.

I created a second project from scratch from the example here: https://github.com/firebase/quickstart-ios.git . This one works from both the Firebase console and curl.

Is there something magical happening in the Firebase client code that doesn't happen when I use the batchImport service? If so, how in the world would you migrate from a different service to FCM?


Is this macro statement legal C++ or something else? And if it is legal how does it work


WebKit has a lot of pre-processor lines like this: #if MACRO1(MACRO2)

For example:

#if PLATFORM(MAC) || (PLATFORM(QT) && USE(QTKIT))
#include "MediaPlayerPrivateQTKit.h"
#if USE(AVFOUNDATION)
#include "MediaPlayerPrivateAVFoundationObjC.h"
#endif
...

So my first thought was that they were function-like macros, but I can't see how that would work, and I couldn't find any #defines for these macros anywhere in the source code.

I asked another engineer what it was, and he has never seen multiple macros used like that inside a #if before either. I found a wiki page that talks about them, but it still wasn't clear to me where they come from.

So my question then: Is this valid C++ or is it being replaced in the code by another tool/language like CMake or something else, and if it is valid C++ is there a spec anyone is aware of that talks about this?

I'm a support engineer for a C++ Static Analysis tool that isn't handling this syntax. A customer asked us to handle it, but if I'm going to take this to the senior engineer I'd like to not sound like an idiot :) So I'd like the nitty gritty if anyone knows it.
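For orientation, a hedged sketch of the pattern WebKit actually uses (simplified from WTF's Platform.h): each query is an ordinary function-like macro that pastes its argument onto a prefix and tests the resulting macro, so #if PLATFORM(MAC) is plain preprocessing, not another tool:

// Simplified from WebKit's WTF Platform.h: the queries are function-like
// macros built with token pasting.
#define PLATFORM(WTF_FEATURE) (defined WTF_PLATFORM_##WTF_FEATURE && WTF_PLATFORM_##WTF_FEATURE)
#define USE(WTF_FEATURE)      (defined WTF_USE_##WTF_FEATURE && WTF_USE_##WTF_FEATURE)

#define WTF_PLATFORM_MAC 1    // set somewhere by the platform configuration

#if PLATFORM(MAC)             // expands to (defined WTF_PLATFORM_MAC && WTF_PLATFORM_MAC)
// Mac-only code
#endif

One caveat that may matter for a static-analysis tool: the standard leaves behaviour undefined when the token `defined` is produced by macro expansion inside a #if, so this pattern leans on the common compiler behaviour of evaluating it anyway.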


collection view pan gesture


I have a collection view cell that implements UIGestureRecognizerDelegate:

class MyTripsCollectionViewCell: UICollectionViewCell, UIGestureRecognizerDelegate {

I added a pan gesture recognizer:

override func awakeFromNib() {
    let gestureRecognizer = UIPanGestureRecognizer(target: self, action:#selector(MyTripsCollectionViewCell.handlePan(_:)))
    gestureRecognizer.delegate = self
    self.addGestureRecognizer(gestureRecognizer)
}

Below is the handlePan function:

func handlePan(recognizer: UIPanGestureRecognizer) {
    if (recognizer.state == UIGestureRecognizerState.Began) {
        // if the gesture has just started, record the current centre location
        originalCenter = self.center
    }
    if (recognizer.state == UIGestureRecognizerState.Changed) {
        // translate the center
        if recognizer.isLeft(self) {
            let translation = recognizer.translationInView(self)
            if self.center.x >= 150 {
                self.center = CGPointMake(originalCenter.x + translation.x, originalCenter.y);
            }
            // determine whether the item has been dragged far enough to initiate a delete / complete
            deleteOnDragRelease = self.frame.origin.x == 150 ? false : true
        }
    }
    // 3
    if (recognizer.state == UIGestureRecognizerState.Ended) {
        // the frame this cell would have had before being dragged
        let originalFrame = CGRectMake(0, self.frame.origin.y, self.bounds.size.width, self.bounds.size.height);
        if (!deleteOnDragRelease) {
            // if the item is not being deleted, snap back to the original location
            UIView.animateWithDuration(0.2) {
                self.frame = originalFrame;
            }
        }
    }
}

In this code I am trying to move the collection view cell from left to right until its center reaches the 150 point. This works when I move my finger slowly, but when I move my finger very fast, the translation takes a large value, which moves the cell past 150 and then stops it, because the condition becomes true. How can I fix this problem with fast panning?

Get original record from CKReference


I have a record type that has two fields; both fields are just CKReferences to two other record types. It has worked fine so far, but I just came across a need to get the original records from the CKReference record. Since I've already retrieved the record containing the references and stored it in a dictionary, I was trying to get the original record from within the reference object via the key. However, I wasn't getting the data I was expecting, so I NSLogged the class type and saw it coming out as a CKReference instead of the original class (record) type. I'm trying to avoid making another database (CloudKit) call to get the records I need. The keys in the dictionary are the record IDs of the original records from the CKReference records, so I already have the correct record IDs.

So, is this even possible, or will I be forced to just take the record ids and make a CKFetchRecordsOperation call?

Here's my code for processing the record that contains the two CKReference records...

UserActivity *userActivity = [[UserActivity alloc] init];
CKReference *cidReference = [[CKReference alloc] initWithRecord:record[IMAGE_DATA_RECORD_TYPE] action:CKReferenceActionNone];
//NSLog(@"RecordID of cidReference: %@", cidReference.recordID.recordName);
ImageData *imageData = (ImageData *)cidReference;

It's the imageData object that does not contain the expected data, which is because it's not the right class...it's a CKReference class.

Thanks in advance!
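A hedged note on feasibility: a CKReference stores only the target's recordID (plus the reference action), never the referenced record's fields, so no cast can turn it back into the original record. Unless the original records are already cached somewhere locally, a fetch such as CKFetchRecordsOperation with the record IDs you already hold is unavoidable.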


Best Practice - Consolidating duplicate text literals across many translation units


Our company's static analysis tool is stating that there are duplicated strings (text literals), spread across many translation units (source files). For example, the string "NULL console pointer" exists 1 time in module_a.c, 5 times in module_b.c and 1 time in module_f.c.

We also have coding guidelines that state there are to be no global variables, and we prefer to have no variables in header files. Our platform is an embedded system, so consolidating constant text will free room for other purposes (and make the program load faster). In other words, there should only be one instance of each text literal.

So, what is an efficient design or architecture for consolidating constant text literals across multiple translation units? Is there a length limit below which duplication is not worth consolidating (such as the string "\r\n")? We would prefer solutions that are performance efficient, such as preferring direct access over calling a getter function. (Note: at this time, the text does not need to be translated into multiple languages.)

Languages: C and C++ (the code base is more C than C++).
Processor: ARM Cortex A8.
Platform: Embedded system; safety, quality and performance critical (medical device).
Compilers: IAR Embedded Workbench (for ARM).

Edit 1: Linker not consolidating. I scanned the BIN file and it does contain multiple instances of "NULL console pointer". The linker has the option "Merge duplicate section" and I checked that; the binary still contains duplicates.
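One conventional layout, sketched with hypothetical names: give each shared literal exactly one definition in a dedicated translation unit and expose only extern declarations in a header. That keeps a single copy in ROM, costs no getter call, and stays within a no-global-variables guideline read as "no mutable globals":

/* strings.h -- declarations only; no storage is allocated here. */
extern const char STR_NULL_CONSOLE[];

/* strings.c -- the one definition every module links against. */
const char STR_NULL_CONSOLE[] = "NULL console pointer";

/* module_b.c -- usage: direct access, no function-call overhead. */
#include "strings.h"
/* report_error(STR_NULL_CONSOLE); */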

Where does chromium create the context for a new window/frame in its source code?


I'm browsing chromium source to find where the context for a new window/frame is created (or linked to its frame object).

What I've found:

https://cs.chromium.org/chromium/src/v8/include/v8.h?type=cs&sq=package:chromium&l=7127

/**
 * Creates a new context and returns a handle to the newly allocated
 * context.
 *
 * \param isolate The isolate in which to create the context.
 *
 * \param extensions An optional extension configuration containing
 * the extensions to be installed in the newly created context.
 *
 * \param global_template An optional object template from which the
 * global object for the newly created context will be created.
 *
 * \param global_object An optional global object to be reused for
 * the newly created context. This global object must have been
 * created by a previous call to Context::New with the same global
 * template. The state of the global object will be completely reset
 * and only object identity will remain.
 */

static Local<Context> New(
    Isolate* isolate, ExtensionConfiguration* extensions = NULL,
    Local<ObjectTemplate> global_template = Local<ObjectTemplate>(),
    Local<Value> global_object = Local<Value>(),
    size_t context_snapshot_index = 0);

This method creates a V8 context, but I want to know where a frame object is linked to its context.


Strange post-increment behaviour in C++


I have a friend who is getting different output than I do for the following program:

#include <cstdio>

int main() {
    int x = 20, y = 35;
    x = y++ + x++;
    y = ++y + ++x;

    printf("%d%d", x, y);

    return 0;
}

I am using Ubuntu, and have tried using gcc and clang. I get 5693 from both.

My friend is using Visual Studio 2015, and gets 5794.

The answer I get (5693) makes most sense to me, since:

  1. the first line sets x = x + y (which is x = 20+35 = 55) (note: x was incremented, but assigned over top of, so doesn't matter)
  2. y was incremented and is therefore 36
  3. next line increments both, adds the result and sets it as y (which is y = 37 + 56 = 93)
  4. which would be 56 and 93, so the output is 5693

I could see the VS answer making sense if the post-increment happened after the assignment. Is there some spec that makes one of these answers more right than the other? Is it just ambiguous? Should we fire anyone who writes code like this, making the ambiguity irrelevant?

Note: Initially, we only tried with gcc, however clang gives this warning:

coatedmoose@ubuntu:~/playground$ clang++ strange.cpp 
strange.cpp:8:16: warning: multiple unsequenced modifications to 'x' [-Wunsequenced]
    x = y++ + x++;
      ~        ^
1 warning generated.
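(A hedged aside rather than a verdict: x = y++ + x++ modifies x twice with no sequencing between the x++ side effect and the assignment, which is exactly what clang's -Wunsequenced warning points at. The behaviour is undefined, so the standard makes neither 5693 nor 5794 the "right" answer, and both compilers are conforming.)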

UIWebView Desktop Website Won't Work


Partly as an exercise for learning a little iOS programming, and partly because I wish I had a WhatsApp client on iPad, I am trying to create an app that I can personally use as a WhatsApp client for my iPad. All it does is load up the web.whatsapp.com desktop site in a UIWebView like so:

override func viewDidLoad() {
    NSUserDefaults.standardUserDefaults().registerDefaults(["UserAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/601.5.17 (KHTML, like Gecko) Version/9.1 Safari/601.5.17"])
    super.viewDidLoad()
    self.webView.frame = self.view.bounds
    self.webView.scalesPageToFit = true

    // Do any additional setup after loading the view, typically from a nib.
    let url = NSURL(string: "https://web.whatsapp.com")
    let requestObj = NSMutableURLRequest(URL: url!)
    webView.loadRequest(requestObj)
    //webView.frame = CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height);
}

This works okay. It does in fact load the correct webapp, rather than redirecting to the whatsapp home page as would usually happen when the server detects a mobile device. However, rather than presenting me with the QR Code screen for log in, it presents me with this:

Now, if I use WhatsApp Web from Safari on my iPad (and request the Desktop version), it works perfectly fine. As you can see, I am requesting the Desktop site for my UIWebView by setting the UserAgent. So I am wondering why it would not work in the UIWebView, and whether perhaps there is some other header or value that needs to be set in order to convince the app to work within my UIWebView control?

Why can I not place Master and Detail view next to each other in UISplitViewController on the first run, but upon rotation it works?


I have a split view controller that has a list of items on the left and a detail view on the right. Relevant code in AppDelegate:

let splitViewController = mainView.instantiateViewControllerWithIdentifier("initial") as! UISplitViewController

let rightNavController = splitViewController.viewControllers.last as! UINavigationController
let detailViewController = rightNavController.topViewController as! DetailsIpad

let leftNavController = splitViewController.viewControllers.first as! UINavigationController
let masterViewController = leftNavController.topViewController as! MainViewController

masterSplitViewController = masterViewController
detailSplitViewController = detailViewController

// Override point for customization after application launch.
let navigationController = splitViewController.viewControllers[splitViewController.viewControllers.count-1] as! UINavigationController
navigationController.topViewController!.navigationItem.leftBarButtonItem = splitViewController.displayModeButtonItem()
splitViewController.delegate = self

self.window!.rootViewController = splitViewController

When I first launch the app I see that the right part of the split screen takes up all of the screen:

enter image description here

If I rotate the screen, it becomes properly set (probably because both views are present on the screen):

enter image description here

When I set breakpoints everywhere, I see that the detail view on the right gets loaded before the master view on the left (list of items), despite not being called directly. I cannot change the order in which the views of the split screen are called. How can I fix this?


Generating a random 2D array


I want to make a simple program where a user can input "i x y", where x and y are integers, the dimensions of the array. I have made a class myarray which makes the matrix. However, the output of the program is just blank spaces and \n characters. Does anyone know what I can do to fix it?

#include <iostream>
#include <cstdlib>
#include <ctime>

using namespace std;

class myarray
{
    char** grid;
    int dimX,dimY;
public:
    myarray(){grid=0;}
    myarray(int m,int n) {grid = new char* [m]; for(int i=0;i<m;i++) {grid[i]=new char [n];} dimX=m; dimY=n;}
    ~myarray(){for(int i = 0; i < dimX; ++i) {delete[] grid[i];} delete[] grid;}
    char** fetcharray(){return grid;}

    void display_grid();
    void randomize_grid(){for(int i=0;i<dimX;i++) for(int j=0;j<dimY;j++) grid[i][j]=rand()%10;}
};

int main()
{
    srand(time(NULL));
    bool check(true);
    while(check)
    {
        char a; //a-firstinp;
        int m,n; //m,n-grid size
        cin>>a;

        switch(a)
        {
        case 'i':
        case 'I': {cin>>m>>n;
                  myarray c(m,n);
                  c.randomize_grid();
                  c.display_grid();
                  break;}
        default: {cout<<"Invalid input! Possible commands: i,c,l,v,h,k,f,s,x! Try again: \n";
                  break;}
        }
    }
}

void myarray::display_grid()
{
    for(int i=0;i<dimX;i++)
    {
        cout<<"n";
        for(int j=0;j<dimY;j++)
            cout<<grid[i][j];
    }
}

Thank you in advance!
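A hedged note on the blank output: randomize_grid() stores the raw values 0-9, which as chars are non-printing control codes, so display_grid() prints invisible characters. Storing digit characters instead would make the grid visible:

// Store printable digit characters rather than raw byte values 0..9.
void myarray::randomize_grid()
{
    for (int i = 0; i < dimX; i++)
        for (int j = 0; j < dimY; j++)
            grid[i][j] = '0' + rand() % 10;
}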


boost::asio::io_service.post() background thread memory leak


I want to run boost::asio::io_service.run() in a background thread, so that when I need to I can post() work into it.

This is main func:

int main(int /*argc*/, char** /*argv*/)
{
    std::string message = "hello";

    logg = new logger_client(filename,ip,13666);
    logg->start();

    while (true)
        logg->add_string(message);

    return 0;
}

And some relevant funcs from logger_client:

std::auto_ptr<boost::asio::io_service::work> work;

logger_client::logger_client(std::string& filename,std::string& ip, uint16_t port) : work(new boost::asio::io_service::work(io_service))
{
}

void logger_client::start()
{
    ios_thread = new boost::thread(boost::bind(&boost::asio::io_service::run, &io_service));
}

void print_nothing()
{
    printf("%sn","lie");
}

void logger_client::add_string(std::string& message)
{
    io_service.post(boost::bind(print_nothing));
    //io_service.post(strand->wrap(boost::bind(&logger_client::add_string_imp,this,message)));
    //io_service.run();
}

When I run this, my program eats 2 GB in less than a minute. If I remove the endless work object and change to this:

void logger_client::add_string(std::string& message)
{
    io_service.post(boost::bind(print_nothing));
    //io_service.post(strand->wrap(boost::bind(&logger_client::add_string_imp,this,message)));
    io_service.run();
}

The program works just fine. But I don't want to invoke async operations on this (main) thread. What am I doing wrong?
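A hedged diagnosis: with the endless work object in place, the main thread's while (true) loop posts handlers far faster than the single background thread can drain them, so the io_service queue grows without bound; it looks like a leak but is really an unbounded producer. (In the second variant, run() executes the pending handlers on the calling thread and empties the queue, which is why memory stays flat.) One sketch of a fix keeps the background thread but throttles the producer; real code might instead bound the queue or batch messages:

#include <boost/thread/thread.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>

int main()
{
    std::string message = "hello";

    logg = new logger_client(filename, ip, 13666);
    logg->start();

    while (true)
    {
        logg->add_string(message);
        // Throttle the producer so posted handlers cannot pile up faster
        // than the background thread drains them.
        boost::this_thread::sleep(boost::posix_time::milliseconds(1));
    }
    return 0;
}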


How to create a circular dotted line as a CALayer?


I read this post Draw dotted (not dashed!) line about drawing a dotted line (rather than a dashed line). However, I am not too familiar with graphics generally and I'm wondering how I can do this with a CALayer (so I don't have to do the whole get current graphics context thing).

I am trying to produce a dotted line that looks like this (the white part, ignore the background):

dotted line

Here's the code I have working to produce a dotted line:

CAShapeLayer *shapelayer = [CAShapeLayer layer];
UIBezierPath *path = [UIBezierPath bezierPath];
[path moveToPoint:startPoint];
[path addLineToPoint:endPoint];
[path setLineCapStyle:kCGLineCapRound];
UIColor *fill = [UIColor whiteColor];
shapelayer.strokeStart = 0.0;
shapelayer.strokeColor = fill.CGColor;
shapelayer.lineWidth = 4.0;
shapelayer.lineJoin = kCALineJoinRound;
shapelayer.lineDashPattern = [NSArray arrayWithObjects:[NSNumber numberWithInt:4],[NSNumber numberWithInt:6 ], nil];
shapelayer.path = path.CGPath;

return shapelayer;

How can I mirror the code in the SO post I referenced but continue using a CALayer?

I tried modifying the code from that post like so:

UIBezierPath * path = [[UIBezierPath alloc] init];
[path moveToPoint:startPoint];
[path addLineToPoint:endPoint];
[path setLineWidth:8.0];
CGFloat dashes[] = { path.lineWidth, path.lineWidth * 2 };
[path setLineDash:dashes count:2 phase:0];
[path setLineCapStyle:kCGLineCapRound];
[path stroke];

CAShapeLayer *returnLayer = [CAShapeLayer layer];
returnLayer.path = path.CGPath;
return returnLayer;

However, this ends up drawing nothing.
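A hedged explanation for the blank result: UIBezierPath's setLineDash:, setLineWidth: and stroke only affect immediate drawing into the current graphics context (which is nil here), and a CAShapeLayer reads none of those path attributes, so the dash pattern, width, cap and strokeColor all have to be set on the layer, as in the first snippet. To turn the working dashed line into round dots, the usual trick is an "on" length of 0 combined with a round cap, e.g. lineDashPattern = @[@0, @8] together with shapelayer.lineCap = kCALineCapRound, so each zero-length dash collapses to a dot.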


Where exactly is the red zone on x86-64?


From Wikipedia:

In computing, a red zone is a fixed-size area in a function's stack frame beyond the return address which is not preserved by that function. The callee function may use the red zone for storing local variables without the extra overhead of modifying the stack pointer. This region of memory is not to be modified by interrupt/exception/signal handlers. The x86-64 ABI used by System V mandates a 128-byte red zone, which begins directly after the return address and includes the function's arguments. The OpenRISC toolchain assumes a 128-byte red zone.

From the System V x86-64 ABI:

The 128-byte area beyond the location pointed to by %rsp is considered to be reserved and shall not be modified by signal or interrupt handlers. Therefore, functions may use this area for temporary data that is not needed across function calls. In particular, leaf functions may use this area for their entire stack frame, rather than adjusting the stack pointer in the prologue and epilogue. This area is known as the red zone.

  • Given these two quotes, is the red zone above the stacked return address or below the stacked return address?

  • Since this red zone is relative to RSP, does it move downward with each push and does it move upward with each pop?
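For what it's worth, a hedged reconciliation of the two quotes: the ABI wording is the operative one. The red zone is the 128 bytes at addresses just below %rsp; on function entry %rsp points at the return address, so the zone lies below the stacked return address in the direction of stack growth. The Wikipedia claim that it "includes the function's arguments" looks like a confusion with stack-passed arguments, which live above the return address. And because the zone is defined purely relative to %rsp, it slides down with every push and back up with every pop.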


How do I derive and add new arguments to the base version of the constructor?


I'm trying to extend a base class with some data members and hence need some additional constructor arguments in addition to the constructor arguments that my base class needs. I want to forward the first constructor arguments to the base class. Here's what I tried:

#include <string>
#include <utility>

struct X
{
    X( int i_ ) : i(i_) {}
    int i;
};

struct Y : X
{
    template <typename ...Ts>        // note: candidate constructor not viable: 
    Y( Ts&&...args, std::string s_ ) // requires single argument 's_', but 2 arguments 
//  ^                                // were provided
    : X( std::forward<Ts>(args)... )
    , s( std::move(s_) )
    {}

    std::string s;
};

int main()
{
    Y y( 1, "" ); // error: no matching constructor for initialization of 'Y'
//    ^  ~~~~~
}

However, the compiler (clang 3.8, C++14 mode) spits the following error messages at me (the main messages are also in the above source code for reading convenience):

main.cpp:23:7: error: no matching constructor for initialization of 'Y'
    Y y( 1, "" );
      ^  ~~~~~
main.cpp:13:5: note: candidate constructor not viable: requires single argument 's_', but 2 arguments were provided
    Y( Ts&&...args, std::string s_ )
    ^
main.cpp:10:8: note: candidate constructor (the implicit move constructor) not viable: requires 1 argument, but 2 were provided
struct Y : X
       ^
main.cpp:10:8: note: candidate constructor (the implicit copy constructor) not viable: requires 1 argument, but 2 were provided
1 error generated.

Why is clang telling me that my templated constructor accepts only a single argument, even though it has a variadic parameter pack? How can I solve this?
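A hedged explanation and workaround: a parameter pack that is followed by further parameters can never be deduced from a call's arguments (deduction only fills a trailing pack), so Ts deduces to nothing and the constructor effectively takes the single argument s_. Moving the extra parameter in front of the pack restores deduction; a sketch, dropping into the program above:

// Sketch: put the new argument first so the pack is trailing and deducible.
struct Y : X
{
    template <typename... Ts>
    Y( std::string s_, Ts&&... args )
    : X( std::forward<Ts>(args)... )
    , s( std::move(s_) )
    {}

    std::string s;
};

// The call site swaps its argument order accordingly:
int main()
{
    Y y( "", 1 );
}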


Why is my compiler showing errors that shouldn't exist?


I've got 2 errors that make me sick and a little bit confused.

Error #1: error C2679: binary '+=' : no operator found which takes a right-hand operand of type 'std::basic_string<_Elem,_Traits,_Ax>' (or there is no acceptable conversion)

The code for this error:

CString lancuch1;
lancuch1 = "Znaleziono ";
lancuch1 += liczba1.str();
lancuch1 += " pozycji.";

And the second one, more weird:

Error #2: error C2440: 'initializing' : cannot convert from 'std::_Vector_iterator<_Ty,_Alloc>' to 'std::basic_string<_Elem,_Traits,_Ax>'

I get this error 7 times for this code:

for(int i = 0 ; i < pojemnosc_vectora; i++){
    std::string linijka = (vector.begin()+i);
    char deli = ';';
    int a = 0;
    for(int i = 0; i<5; i++){
        std::string pokico = linijka.substr(a, deli);
        vector2.push_back(pokico);
        a+=pokico.length();
    }
}
int licznik_komunikatow=0;
for(int i=0; i<vector.size(); i++){
    std::string komunikat1 = vector2.begin()+(licznik_komunikatow);
    std::string komunikat2 = vector2.begin()+(licznik_komunikatow+1);
    std::string komunikat3 = vector2.begin()+(licznik_komunikatow+2);
    std::string komunikat4 = vector2.begin()+(licznik_komunikatow+3);
    std::string komunikat5 = vector2.begin()+(licznik_komunikatow+4);
    CString komun,komun1,komun2,komun3,komun4;
    komun = komunikat1.c_str();
    komun1 = komunikat2.c_str();
    komun2 = komunikat3.c_str();
    komun3 = komunikat4.c_str();
    komun4 = komunikat5.c_str();
    printf("Nazwa: %s \n Cena: %s \n Ilość: %s \n Gdzie: %s \n Kod: %s \n ", komun, komun1, komun2, komun3, komun4 );
}

Tell me, is it my mistake or Visual 2005's? I'm a little bit tired of weird errors that I don't really understand. Does anyone have an idea how to fix this?
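A hedged pointer at the two fixes: an iterator has to be dereferenced to get the string it refers to (or index the vector instead), and std::string::substr takes a position and a length, not a delimiter character, so the ';' has to be located with find() first. A sketch:

// Dereference the iterator (or index the vector) instead of assigning it.
std::string linijka = *(vector.begin() + i);   // or simply: vector[i]

// substr(pos, len) -- find the delimiter first, then cut everything before it.
std::size_t pos = linijka.find(';');
std::string pokico = linijka.substr(0, pos);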